Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Users Custom Security Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md | Title: Assign, update, list, or remove custom security attributes for a user (Preview) description: Assign, update, list, or remove custom security attributes for a user in Azure Active Directory. + Last updated 02/20/2023 |
active-directory | Custom Security Attributes Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-add.md | Title: Add or deactivate custom security attributes in Azure AD (Preview) description: Learn how to add new custom security attributes or deactivate custom security attributes in Azure Active Directory. + |
active-directory | Custom Security Attributes Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md | description: Learn how to manage access to custom security attributes in Azure Active Directory. + |
active-directory | Custom Security Attributes Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md | Title: What are custom security attributes in Azure AD? (Preview) description: Learn about custom security attributes in Azure Active Directory. + |
active-directory | Custom Security Attributes Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-troubleshoot.md | Title: Troubleshoot custom security attributes in Azure AD (Preview) description: Learn how to troubleshoot custom security attributes in Azure Active Directory. + |
active-directory | Tutorial Offboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md | Title: 'Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)' + Title: 'Execute employee off-boarding tasks in real-time on their last day of work with Azure portal (preview)' description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Azure portal (preview). -+ Previously updated : 08/18/2022- Last updated : 03/18/2023+ -# Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview) +# Execute employee off-boarding tasks in real-time on their last day of work with Azure portal (preview) This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Azure portal. -This off-boarding scenario will run a workflow on-demand and accomplish the following tasks: +This off-boarding scenario runs a workflow on-demand and accomplishes the following tasks: 1. Remove user from all groups 2. Remove user from all Teams The Lifecycle Workflows preview requires Azure AD Premium P2. For more informati ## Before you begin -As part of the prerequisites for completing this tutorial, you'll need an account that has group and Teams memberships and that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). +As part of the prerequisites for completing this tutorial, you need an account that has group and Teams memberships and that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The leaver scenario can be broken down into the following: - **Prerequisite:** Create a user account that represents an employee leaving your organization Use the following steps to create a leaver on-demand workflow that will execute 6. From the templates, select **Select** under **Real-time employee termination**. :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting template leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: - 7. Next, you'll configure the basic information about the workflow. Select **Next:Review tasks** when you're done with this step. + 7. Next, you configure the basic information about the workflow. Select **Next:Review tasks** when you're done with this step. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of review template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png"::: 8. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished. Use the following steps to create a leaver on-demand workflow that will execute 10. Next, select on **+Add users** to designate the users to be executed on this workflow. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of real time leaver add users." 
lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png"::: - 11. A panel with the list of available users will pop up on the right side of the screen. Select **Select** when you're done with your selection. + 11. A panel with the list of available users pops up on the right side of the screen. Select **Select** when you're done with your selection. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of real time leaver template selected users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png"::: 12. Select **Next: Review and create** when you're satisfied with your selection. Use the following steps to create a leaver on-demand workflow that will execute :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of creating real time leaver workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png"::: ## Run the workflow -Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. +Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. >[!NOTE] >Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. To run a workflow on-demand, for users using the Azure portal, do the following ## Check tasks and workflow status -At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we'll look at the status using the user focused reports. +At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we look at the status using the user focused reports. - 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses. + 1. To begin, select the **Workflow history (Preview)** tab to view the user summary and associated workflow tasks and statuses. :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-real-time.png" alt-text="Screenshot of real time history overview." lightbox="media/tutorial-lifecycle-workflows/workflow-history-real-time.png"::: -1. Once the **Workflow history (Preview)** tab has been selected, you'll land on the workflow history page as shown. +1. Once the **Workflow history (Preview)** tab has been selected, you land on the workflow history page as shown. 
:::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-real-time.png" alt-text="Screenshot of real time workflow history." lightbox="media/tutorial-lifecycle-workflows/user-summary-real-time.png"::: 1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith. |
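The tutorial in this entry runs the leaver workflow on-demand through the Azure portal. As a rough companion, the sketch below shows how the same on-demand run might be requested from Python through the Lifecycle Workflows (preview) Graph API. The `activate` endpoint, the `beta` API version, and the `subjects` payload shape are assumptions based on the Lifecycle Workflows Graph reference and should be verified there; the workflow ID, user ID, and access token are placeholders.

```python
# Hedged sketch: request an on-demand run of a Lifecycle Workflows (preview) workflow via
# Microsoft Graph. The endpoint, beta API version, and payload shape are assumptions from
# the Lifecycle Workflows Graph reference; verify before use. IDs and token are placeholders.
import requests

GRAPH_BASE = "https://graph.microsoft.com/beta"
ACCESS_TOKEN = "<access token with Lifecycle Workflows permissions>"  # placeholder
WORKFLOW_ID = "<workflow id>"  # placeholder: the real-time leaver workflow created above
USER_ID = "<user object id>"   # placeholder: the user the workflow should run against

resp = requests.post(
    f"{GRAPH_BASE}/identityGovernance/lifecycleWorkflows/workflows/{WORKFLOW_ID}/activate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json={"subjects": [{"id": USER_ID}]},  # assumed body: users the on-demand run targets
    timeout=30,
)
resp.raise_for_status()
print("On-demand run requested, status:", resp.status_code)
```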
active-directory | Tutorial Onboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md | Title: 'Automate employee onboarding tasks before their first day of work with Azure portal (preview)' description: Tutorial for onboarding users to an organization using Lifecycle workflows with Azure portal (preview). -+ Previously updated : 08/18/2022- Last updated : 03/18/2023+ Detailed breakdown of the relevant attributes: |employeeHireDate|Used to trigger the workflow|Employee| |department|Used to provide the scope for the workflow|Employee| -The prehire scenario can be broken down into the following: +The pre-hire scenario can be broken down into the following: - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager - **Prerequisite:** Editing the attributes required for this scenario in the portal - **Prerequisite:** Edit the attributes for this scenario using Microsoft Graph Explorer The prehire scenario can be broken down into the following: - Triggering the workflow - Verifying the workflow was successfully executed -## Create a workflow using pre-hire template -Use the following steps to create a prehire workflow that will generate a TAP and send it via email to the user's manager using the Azure portal. +## Create a workflow using prehire template +Use the following steps to create a pre-hire workflow that generates a TAP and send it via email to the user's manager using the Azure portal. 1. Sign in to Azure portal. 2. On the right, select **Azure Active Directory**. Use the following steps to create a prehire workflow that will generate a TAP an 6. From the templates, select **select** under **Onboard pre-hire employee**. :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting workflow template." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: - 7. Next, you'll configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow triggers two days before the employee's hire date. On the onboard prehire employee screen, add the following settings and then select **Next: Configure Scope**. + 7. Next, you configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow triggers two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png"::: - 8. Next, you'll configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters). + 8. Next, you configure the scope. The scope determines which users this workflow runs against. 
In this case, it is on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters). :::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png"::: To run a workflow on-demand, for users using the Azure portal, do the following ## Check tasks and workflow status -At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we'll look at the status using the user focused reports. +At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we look at the status using the user focused reports. - 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses. + 1. To begin, select the **Workflow history (Preview)** tab to view the user summary and associated workflow tasks and statuses. :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history.png" alt-text="Screenshot of workflow History status." lightbox="media/tutorial-lifecycle-workflows/workflow-history.png"::: -1. Once the **Workflow history (Preview)** tab has been selected, you'll land on the workflow history page as shown. +1. Once the **Workflow history (Preview)** tab has been selected, you land on the workflow history page as shown. :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary.png" alt-text="Screenshot of workflow history overview" lightbox="media/tutorial-lifecycle-workflows/user-summary.png"::: 1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith. |
active-directory | Tutorial Prepare Azure Ad User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md | Title: 'Tutorial: Preparing user accounts for Lifecycle workflows (preview)' description: Tutorial for preparing user accounts for Lifecycle workflows (preview). -+ Previously updated : 06/13/2022- Last updated : 03/18/2023+ # Preparing user accounts for Lifecycle workflows tutorials (Preview) -For the on-boarding and off-boarding tutorials you'll need accounts for which the workflows will be executed, the following section will help you prepare these accounts, if you already have test accounts that meet the following requirements you can proceed directly to the on-boarding and off-boarding tutorials. Two accounts are required for the on-boarding tutorials, one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set: +For the on-boarding and off-boarding tutorials you need accounts for which the workflows are executed. This section helps you prepare these accounts, if you already have test accounts that meet the following requirements, you can proceed directly to the on-boarding and off-boarding tutorials. Two accounts are required for the on-boarding tutorials, one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set: - employeeHireDate must be set to today - department must be set to sales The off-boarding tutorials only require one account that has group and Teams mem [!INCLUDE [active-directory-p2-license.md](../../../includes/active-directory-p2-license.md)] - An Azure AD tenant-- A global administrator account for the Azure AD tenant. This account will be used to create our users and workflows.+- A global administrator account for the Azure AD tenant. This account is used to create our users and workflows. ## Before you begin -In most cases, users are going to be provisioned to Azure AD either from an on-premises solution (Azure AD Connect, Cloud sync, etc.) or with an HR solution. These users will have the attributes and values populated at the time of creation. Setting up the infrastructure to provision users is outside the scope of this tutorial. For information, see [Tutorial: Basic Active Directory environment](../cloud-sync/tutorial-basic-ad-azure.md) and [Tutorial: Integrate a single forest with a single Azure AD tenant](../cloud-sync/tutorial-single-forest.md) +In most cases, users are going to be provisioned to Azure AD either from an on-premises solution (Azure AD Connect, Cloud sync, etc.) or with an HR solution. These users have the attributes and values populated at the time of creation. Setting up the infrastructure to provision users is outside the scope of this tutorial. For information, see [Tutorial: Basic Active Directory environment](../cloud-sync/tutorial-basic-ad-azure.md) and [Tutorial: Integrate a single forest with a single Azure AD tenant](../cloud-sync/tutorial-single-forest.md) ## Create users in Azure AD -We'll use the Graph Explorer to quickly create two users needed to execute the Lifecycle Workflows in the tutorials. One user will represent our new employee and the second will represent the new employee's manager. +We use the Graph Explorer to quickly create two users needed to execute the Lifecycle Workflows in the tutorials. 
One user represents our new employee and the second represents the new employee's manager. -You'll need to edit the POST and replace the <your tenant name here> portion with the name of your tenant. For example: $UPN_manager = "bsimon@<your tenant name here>" to $UPN_manager = "bsimon@contoso.onmicrosoft.com". +You need to edit the POST and replace the <your tenant name here> portion with the name of your tenant. For example: $UPN_manager = "bsimon@<your tenant name here>" to $UPN_manager = "bsimon@contoso.onmicrosoft.com". >[!NOTE] >Be aware that a workflow will not trigger when the employee hire date (Days from event) is prior to the workflow creation date. You must set a employeeHiredate in the future by design. The dates used in this tutorial are a snapshot in time. Therefore, you should change the dates accordingly to accommodate for this situation. -First we'll create our employee, Melva Prince. +First we create our employee, Melva Prince. 1. Now navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 2. Sign-in to Graph Explorer with the global administrator account for your tenant. 3. At the top, change **GET** to **POST** and add `https://graph.microsoft.com/v1.0/users/` to the box. - 4. Copy the code below in to the **Request body** - 5. Replace `<your tenant here>` in the code below with the value of your Azure AD tenant. + 4. Copy the following code in to the **Request body** + 5. Replace `<your tenant here>` in the following code with the value of your Azure AD tenant. 6. Select **Run query**- 7. Copy the ID that is returned in the results. This will be used later to assign a manager. + 7. Copy the ID that is returned in the results. This is used later to assign a manager. ```HTTP { First we'll create our employee, Melva Prince. ``` :::image type="content" source="media/tutorial-lifecycle-workflows/graph-post-user.png" alt-text="Screenshot of POST create Melva in graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-post-user.png"::: -Next, we'll create Britta Simon. This is the account that will be used as our manager. +Next, we create Britta Simon. This is the account that is used as our manager. 1. Still in [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 2. Make sure the top is still set to **POST** and `https://graph.microsoft.com/v1.0/users/` is in the box. - 3. Copy the code below in to the **Request body** - 4. Replace `<your tenant here>` in the code below with the value of your Azure AD tenant. + 3. Copy the following code in to the **Request body** + 4. Replace `<your tenant here>` in the following code with the value of your Azure AD tenant. 5. Select **Run query**- 6. Copy the ID that is returned in the results. This will be used later to assign a manager. + 6. Copy the ID that is returned in the results. This is used later to assign a manager. ```HTTP { "accountEnabled": true, Next, we'll create Britta Simon. This is the account that will be used as our m >[!NOTE] > You need to change the <your tenant name here> section of the code to match your Azure AD tenant. -As an alternative, the following PowerShell script may also be used to quickly create two users needed execute a lifecycle workflow. One user will represent our new employee and the second will represent the new employee's manager. +As an alternative, the following PowerShell script may also be used to quickly create two users needed execute a lifecycle workflow. 
One user represents our new employee and the second represents the new employee's manager. >[!IMPORTANT] >The following PowerShell script is provided to quickly create the two users required for this tutorial. These users can also be created manually by signing in to the Azure portal as a global administrator and creating them. -In order to create this step, save the PowerShell script below to a location on a machine that has access to Azure. +In order to create this step, save the following PowerShell script to a location on a machine that has access to Azure. Next, you need to edit the script and replace the <your tenant name here> portion with the name of your tenant. For example: $UPN_manager = "bsimon@<your tenant name here>" to $UPN_manager = "bsimon@contoso.onmicrosoft.com". You need to do perform this action for both $UPN_employee and $UPN_manager -After editing the script, save it and follow the steps below. +After editing the script, save it and follow these steps: 1. Open a Windows PowerShell command prompt, with Administrative privileges, from a machine that has access to the Azure portal. 2. Navigate to the saved PowerShell script location and run it. Some of the attributes required for the pre-hire onboarding tutorial are exposed |mail|Used to notify manager of the new employees temporary access pass|Manager| |manager|This attribute that is used by the lifecycle workflow|Employee| -For the tutorial, the **mail** attribute only needs to be set on the manager account and the **manager** attribute set on the employee account. Use the following steps below. +For the tutorial, the **mail** attribute only needs to be set on the manager account and the **manager** attribute set on the employee account. Use the following steps: 1. Sign in to Azure portal. 2. On the right, select **Azure Active Directory**. In order to do this, we must get the object ID for our user Melva Prince. 5. Select the copy sign next to the **Object ID**. 6. Now navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 7. Sign-in to Graph Explorer with the global administrator account for your tenant.- 8. At the top, change **GET** to **PATCH** and add `https://graph.microsoft.com/v1.0/users/<id>` to the box. Replace `<id>` with the value we copied above. + 8. At the top, change **GET** to **PATCH** and add `https://graph.microsoft.com/v1.0/users/<id>` to the box. Replace `<id>` with the value we copied before. 9. Copy the following in to the **Request body** and select **Run query** ```Example { The manager attribute is used for email notification tasks. It's used by the li 1. Still in [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 2. Make sure the top is still set to **PUT** and `https://graph.microsoft.com/v1.0/users/<id>/manager/$ref` is in the box. Change `<id>` to the ID of Melva Prince. 3. Copy the code below in to the **Request body** - 4. Replace `<managerid>` in the code below with the value of Britta Simons ID. + 4. Replace `<managerid>` in the following code with the value of Britta Simons ID. 5. Select **Run query** ```Example { For more information about updating manager information for a user in Graph API, ### Enabling the Temporary Access Pass (TAP) A Temporary Access Pass is a time-limited pass issued by an admin that satisfies strong authentication requirements. -In this scenario, we'll use this feature of Azure AD to generate a temporary access pass for our new employee. It will then be mailed to the employee's manager. 
+In this scenario, we use this feature of Azure AD to generate a temporary access pass for our new employee. It is then mailed to the employee's manager. To use this feature, it must be enabled on our Azure AD tenant. To do this, use the following steps. |
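The Graph Explorer and PowerShell steps in this entry can also be scripted directly against Microsoft Graph. The sketch below is a minimal illustration of the calls the tutorial excerpt describes: a POST to `/v1.0/users` to create the employee, a PATCH to set the attributes the workflow relies on, and a PUT to `/v1.0/users/{id}/manager/$ref` to assign the manager. The tenant name, password, object IDs, and access token are placeholders, and attribute values follow the tutorial.

```python
# Minimal sketch of the Graph calls described above: create the employee, set the
# attributes used by the pre-hire workflow, then point the manager reference at the
# manager account. Tenant name, password, IDs, and the access token are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access token>", "Content-Type": "application/json"}
TENANT = "<your tenant name here>"  # e.g. contoso.onmicrosoft.com

# 1. POST /users - create the employee account (Melva Prince in the tutorial).
employee = {
    "accountEnabled": True,
    "displayName": "Melva Prince",
    "mailNickname": "mprince",
    "userPrincipalName": f"mprince@{TENANT}",
    "passwordProfile": {"forceChangePasswordNextSignIn": True, "password": "<placeholder>"},
}
employee_id = requests.post(f"{GRAPH}/users", headers=HEADERS, json=employee, timeout=30).json()["id"]

# 2. PATCH /users/{id} - set the attributes the workflow trigger and scope rely on.
requests.patch(
    f"{GRAPH}/users/{employee_id}",
    headers=HEADERS,
    json={"employeeHireDate": "2023-04-01T00:00:00Z", "department": "sales"},  # hire date must be in the future
    timeout=30,
)

# 3. PUT /users/{id}/manager/$ref - assign the manager (Britta Simon in the tutorial).
manager_id = "<manager object id>"  # returned when the manager account is created the same way
requests.put(
    f"{GRAPH}/users/{employee_id}/manager/$ref",
    headers=HEADERS,
    json={"@odata.id": f"{GRAPH}/users/{manager_id}"},
    timeout=30,
)
```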
active-directory | Groups Assign Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md | |
active-directory | Groups Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md | |
active-directory | Groups Create Eligible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md | |
active-directory | Groups Pim Eligible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-pim-eligible.md | |
active-directory | Groups Remove Assignment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-remove-assignment.md | |
active-directory | Groups View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md | |
aks | Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md | -While you can route egress traffic through an Azure Load Balancer, there are limitations on the amount of outbound flows of traffic you can have. Azure NAT Gateway allows up to 64,512 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses. +While you can route egress traffic through an Azure Load Balancer, there are limitations on the amount of outbound flows of traffic you can have. Azure NAT Gateway allows up to 64,512 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses. This article shows you how to create an AKS cluster with a Managed NAT Gateway for egress traffic and how to disable OutboundNAT on Windows. This article shows you how to create an AKS cluster with a Managed NAT Gateway f ## Create an AKS cluster with a Managed NAT Gateway -To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` when running `az aks create`. The following example creates a *myResourceGroup* resource group, then creates a *natCluster* AKS cluster in *myResourceGroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 30 seconds. +To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` when running `az aks create`. If you want the NAT gateway to be able to operate out of availability zones, specify the zones using `--zones`. ++The following example creates a *myResourceGroup* resource group, then creates a *natCluster* AKS cluster in *myResourceGroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 30 seconds. ```azurecli-interactive az group create --name myResourceGroup --location southcentralus |
api-management | Trace Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md | +- The policy creates a [Trace](../azure-monitor/app/data-model-complete.md#trace) telemetry in Application Insights, when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` specified in the policy is equal to or greater than the `verbosity` specified in the diagnostic setting. - The policy adds a property in the log entry when [resource logs](./api-management-howto-use-azure-monitor.md#resource-logs) are enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the diagnostic setting. - The policy is not affected by Application Insights sampling. All invocations of the policy will be logged. The `trace` policy adds a custom trace into the request tracing output in the te |Name|Description|Required| |-|--|--| | message | A string or expression to be logged. | Yes |-| metadata | Adds a custom property to the Application Insights [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry. | No | +| metadata | Adds a custom property to the Application Insights [Trace](../azure-monitor/app/data-model-complete.md#trace) telemetry. | No | ### metadata attributes |
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | Title: Migrate to App Service Environment v3 by using the migration feature description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 03/16/2023 Last updated : 03/17/2023 At this time, the migration feature doesn't support migrations to App Service En ### Azure Public: -- Japan West - Jio India West - UAE Central |
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | Title: Turn on transparent data encryption in Azure Arc-enabled SQL Managed Instance (preview) description: How-to guide to turn on transparent data encryption in an Azure Arc-enabled SQL Managed Instance (preview)--++ |
azure-arc | Rotate Sql Managed Instance Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-sql-managed-instance-credentials.md | description: Rotate SQL Managed Instance service-managed credentials (preview) --++ Last updated 03/06/2023 |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | In addition, resource bridge (preview) requires connectivity to the [Arc-enabled ## SSL proxy configuration -If using a proxy, Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files. Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail. Proxy configuration of the management machine isn't configured by the Azure Arc resource bridge. +If using a proxy, Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files. Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail. The proxy server endpoint can't be a .local domain. Proxy configuration of the management machine isn't configured by the Azure Arc resource bridge. There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted. - ## Exclusion list for no proxy The following table contains the list of addresses that must be excluded by using the `-noProxy` parameter in the `createconfig` command. The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0 - Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details. - Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).+ |
azure-maps | Elevation Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/elevation-data-services.md | + + Title: Create elevation data & services using open data +titeSuffix: Microsoft Azure Maps +description: a guide to help developers build Elevation services and tiles using open data on the Microsoft Azure Cloud. ++ Last updated : 3/17/2023++++++# Create elevation data & services ++This guide describes how to use USGS worldwide DEM data from their SRTM mission with 30m accuracy to build an Elevation service on the [Microsoft Azure Cloud]. ++This article describes how to: ++- Create Contour line vector tiles and RGB-encoded DEM tiles. +- Create Elevation API using Azure Function and RGB-encoded DEM tiles from Azure Blob Storage. +- Create Contour line vector tile service using Azure Function and PostgreSQL. ++## Prerequisites ++This guide requires the use of the following third-party software and data: ++- USGS Data. DEM data can be downloaded as GeoTiff with 1 arc second coverage per tile through the [USGS EarthExplorer]. This requires an EarthExplorer account, but the data can be downloaded for free. +- The [QGIS] desktop GIS application is used to process and smoothen the Raster tiles. QGIS is free to download and use. This guide uses QGIS version 3.26.2-Buenos Aires. +- The [rio-rgbify] Python package, developed by MapBox, is used to encode the GeoTIFF as RGB. +- [PostgreSQL] database with the [PostGIS] spatial extension. ++## Create Contour line vector tiles and RGB-encoded DEM tiles ++This guide uses the 36 tiles covering the state of Washington, available from [USGS EarthExplorer]. ++### Download raster tiles from USGS EarthExplorer ++#### Search criteria ++Select the region that you want raster tiles for. For demonstration purposes, this guide uses the "Polygon" method to select the region on the map. ++1. Navigate to the [USGS EarthExplorer]. ++1. In the **Search Criteria** tab, select **Polygon** then click on the map to create the boundary. ++ :::image type="content" source="./media/elevation-services/create-polygon.png" alt-text="A screenshot showing the search criteria tab in the USGS earth explorer web site." lightbox="./media/elevation-services/create-polygon.png"::: ++#### Data sets ++1. Select the **Data Sets** tab. ++1. Select **SRTM 1 Arc-Second Global** from the **Digital Elevations** section. ++ :::image type="content" source="./media/elevation-services/data-sets.png" alt-text="A screenshot showing the data sets tab in the USGS earth explorer web site." lightbox="./media/elevation-services/data-sets.png"::: ++#### Results ++1. Select **Results >>** to view the tiles for the selected region and data set. ++1. The list of downloadable tiles appear on the results page. To download + only tiles you want, select the **Download Options** button on the result card for each tile, + selecting the option **GeoTIFF 1 Arc-Second** and repeat this step for the remaining tiles. ++ :::image type="content" source="./media/elevation-services/results-export.png" alt-text="A screenshot showing the results tab in the USGS earth explorer web site." lightbox="./media/elevation-services/results-export.png"::: ++1. Alternatively, use the bulk download option and select **GeoTIFF 1 Arc-second**. ++### Add raster tiles to QGIS ++Once you have the raster tiles you need, you can import them in QGIS. ++1. Add raster tiles to QGIS by dragging the files to the **QGIS layer** + tab or selecting **Add Layer** in the **Layer** menu. 
++ :::image type="content" source="./media/elevation-services/add-raster-tiles-qgis.png" alt-text="A screenshot showing raster tiles in QGIS." lightbox="./media/elevation-services/add-raster-tiles-qgis.png"::: ++2. When the raster layers are loaded into QGIS, there can be + different shades of tiles. Fix this by merging the raster + layers, which result in a single smooth raster image in GeoTIFF + format. To do this, select **Miscellaneous** from the **Raster** menu, then **Merge...** ++ :::image type="content" source="./media/elevation-services/merge-raster-layers.png" alt-text="A screenshot showing the merge raster menu in QGIS."::: ++3. Reproject the merged raster layer to EPSG:3857 (WGS84 / Pseudo-Mercator) using **Save Raster Layer as** + accessed by right clicking on the merged raster layer in the **table of content** -> + **Export** -> **Save As** option. EPSG:3857 is required to use it with [Azure Maps Web SDK]. ++ :::image type="content" source="./media/elevation-services/save-raster-layer.png" alt-text="A screenshot showing how the merge raster layers menu in QGIS."::: ++4. If you only want to create contour line vector tiles, you can skip the following steps and go to + [Create Contour line vector tile service using Azure Function and PostgreSQL]. ++5. To create an Elevation API, the next step is to RGB-Encode the GeoTIFF. This can be done using + [rio-rgbify], developed by MapBox. There are some challenges running this tool directly in + Windows, so it's easier to run from WSL. Below are the steps in Ubuntu on WSL: ++ ```bash + sudo apt get update + sudo apt get upgrade + sudo apt install python3-pip + pip install rio-rgbify + PATH="$PATH:/home/<user /.local/bin" + # The following two steps are only necessary when mounting an external hard drive or USB flash drive: + sudo mkdir /mnt/f + sudo mount -t drvfs D: /mnt/f ++ rio rgbify -b -10000 -i 0.1 wa_1arc_v3_merged_3857.tif wa_1arc_v3_merged_3857_rgb.tif ++ # The following steps are only necessary when unmounting an external hard drive or USB flash drive: + cd \~ + sudo umount /mnt/f/ + ``` ++ :::image type="content" source="./media/elevation-services/rgb-encoded-geotiff.png" alt-text="A screenshot showing the RGB-encoded GeoTIFF in QGIS."::: ++ The RGB-encoded GeoTIFF allows you to retrieve R, G and B values + for a pixel and calculate the elevation from these values: ++ `elevation (m) = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)` ++6. Next, create a tile set to use with the map control and/or use it to get Elevation for any + geographic coordinates within the map extent of the tile set. The tile set can be created + in QGIS using the **Generate XYZ tiles (Directory)** tool. ++ :::image type="content" source="./media/elevation-services/generate-xyz-tiles-tool.png" alt-text="A screenshot showing the Generate XYZ tiles (Directory) tool in QGIS."::: ++7. Save the location of the tile set, you'll use it in the next Section. ++## Create Elevation API using Azure Function and RGB-encoded DEM tiles from Azure Blob Storage ++The RGB encoded DEM Tiles need to be uploaded to a database storage +before it can be used with the Azure Functions to create an API. ++1. Upload the tiles to Azure Blob Storage. [Azure Storage Explorer] is a useful tool for this purpose. ++ :::image type="content" source="./media/elevation-services/azure-storage-explorer.png" alt-text="A screenshot showing the Microsoft Azure Storage Explorer."::: ++ Uploading tiles to Azure Blob Storage can take several minutes to complete. ++1. 
Once the upload is complete, you can create Azure Function to build an + API that returns elevation for a given geographic coordinate. ++ This function receives a coordinate pair, determine the tile that + covers it at zoom level 14, then determine the pixel coordinates within + that tile that matches the geographic coordinates. It then retrieves + the tile, gets the RGB values for that pixel, then uses the + following formula to determine the elevation: + + `elevation (m) = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)` ++```python +import logging +import json +import azure.functions as func +from PIL import Image +import requests +from io import BytesIO +import math ++def main(req: func.HttpRequest) -> func.HttpResponse: + logging.info('Python HTTP trigger function processed a request.') ++ # http://localhost:7071/api/GetElevationPoint?lng=-122.01911&lat=47.67091 + zoom = 14 + lng = float(req.params.get('lng')) + lat = float(req.params.get('lat')) + logging.info('Lng: ' + str(lng) + ' / lat: ' + str(lat)) ++ # Calculate th global pixel x and y coordinates for a lng / lat + gx = (lng + 180) / 360 + sinLat = math.sin(lat * math.pi / 180) + gy = 0.5 - math.log((1 + sinLat) / (1 - sinLat)) / (4 * math.pi) + mapSize = math.ceil(256 * math.pow(2, zoom)); + gxc = min(max(gx * mapSize + 0.5, 0), mapSize - 1); + gyc = min(max(gy * mapSize + 0.5, 0), mapSize - 1); ++ # Calclate the tile x and y covering the lng / lat + tileX = int(gxc / 256) + tileY = int(gyc / 256) ++ # Calculate the pixel coordinates for the tile covering the lng / lat + tilePixelX = math.floor(gxc - (tileX * 256)) + tilePixelY = math.floor(gyc - (tileY * 256)) ++ response = requests.get("{BlobStorageURL}" + str(zoom) + "/" + str(tileX) + "/" + str(tileY) + ".png") + im = Image.open(BytesIO(response.content)) ++ pix = im.load() + r = pix[tilePixelX,tilePixelY][0] + g = pix[tilePixelX,tilePixelY][1] + b = pix[tilePixelX,tilePixelY][2] ++ # elevation (m) = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1) + ele = -10000 + ((r * 256 * 256 + g * 256 + b) * 0.1) ++ jsonRes = {"elevation": + ele} + logging.info('Response: ' + json.dumps(jsonRes)) ++ if lng and lat: + return func.HttpResponse( + json.dumps(jsonRes), + mimetype="application/json", + ) + else: + return func.HttpResponse( + "ERROR: Missing parameter!", + status_code=400 + ) +``` ++To see the results of the code sample, run it locally: ++```http +localhost:7071/api/GetElevationPoint?lng=-122.01911&lat=47.67091` +``` ++## Create Contour line vector tile service using Azure Function and PostgreSQL ++This section describes the steps to create and process contour lines in +QGIS, upload them to PostgreSQL then create an Azure Function to Query +PostgreSQL to return vector tiles. ++1. In QGIS, open the merged raster tiles in the EPSG:4326 projection created + in step 3 of [Create Contour line vector tiles and RGB-encoded DEM tiles]. ++1. Select **Extraction -> Contour** from the **Raster** menu to + open the Contour tool. ++ :::image type="content" source="./media/elevation-services/contour-tool.png" alt-text="A screenshot showing the contour dialog in QGIS."::: ++ Selecting **Run** creates contour lines and add them as a layer to the map. + some of the contour line edges may appear a little rough. This will be addressed + in the next step. ++ :::image type="content" source="./media/elevation-services/contour-lines.png" alt-text="A screenshot showing a map with contours in QGIS."::: ++1. Select **Toolbox** from the **Processing** menu to bring up the **Processing Toolbox**. +1. 
Then select **Smooth** in the **Vector geometry** section of the **Processing Toolbox**. ++ :::image type="content" source="./media/elevation-services/smooth-dialog.png" alt-text="A screenshot showing the smooth dialog in QGIS."::: ++ > [!NOTE] + > Contour line smoothing can be substantially improved but at the cost of increased file-size. ++1. Load the contour lines to the database. This guide uses the free + version of [PostgreSQL] database that runs on localhost. You + can also load them to the Azure Database for PostgreSQL. ++ The next step requires a PostgreSQL database with [PostGIS] extension. ++1. To create a connection from QGIS to PostgreSQL, select **Add Layer** -> **Add PostGIS Layers** + from the **Layer** menu, then select the **New** button. ++ :::image type="content" source="./media/elevation-services/create-new-postgis-connection.png" alt-text="A screenshot showing the create new PostGIG connection dialog in QGIS."::: ++1. Next, load Data from QGIS to PostgreSQL using the Database Manager in + QGIS. To do this, select **DB Manager** from the **Database** menu. ++ :::image type="content" source="./media/elevation-services/db-manager.png" alt-text="A screenshot showing the DB Manager in QGIS."::: ++1. Connect to the PostGIS database and select **Import Layer/File...** to + Import contour lines to the database. ++ :::image type="content" source="./media/elevation-services/import-vector-layer.png" alt-text="A screenshot showing the import vector dialog in QGIS."::: ++1. You can now use an Azure Function to Query PostgreSQL and return + vector tiles for the contour lines. The tile server can be used with + the Azure Maps web SDK to create a web app that displays contour + lines on the map. ++ ```python + import logging + from wsgiref import headers + import azure.functions as func + import psycopg2 + # Database to connect to + DATABASE = { + 'user': 'postgres', + 'password': '{password}', + 'host': 'localhost', + 'port': '5432', + 'database': '{database}' + } + def main(req: func.HttpRequest) -> func.HttpResponse: + logging.info('Python HTTP trigger function processed a request.') + DATABASE_CONNECTION = None + # get url parameters http://localhost:7071/api/tileserver?zoom={z}&x={x}&y={y} + # http://localhost:7071/api/tileserver?zoom=16&x=10556&y=22870 + zoom = int(req.params.get('zoom')) + x = int(req.params.get('x')) + y = int(req.params.get('y')) + table = req.params.get('table') + # calculate the envelope of the tile + # Width of world in EPSG:3857 + worldMercMax = 20037508.3427892 + worldMercMin = -1 * worldMercMax + worldMercSize = worldMercMax - worldMercMin + # Width in tiles + worldTileSize = 2 ** zoom + + # Tile width in EPSG:3857 + tileMercSize = worldMercSize / worldTileSize + + # Calculate geographic bounds from tile coordinates + # XYZ tile coordinates are in "image space" so origin is + # top-left, not bottom right + xmin = worldMercMin + tileMercSize * x + xmax = worldMercMin + tileMercSize * (x + 1) + ymin = worldMercMax - tileMercSize * (y + 1) + ymax = worldMercMax - tileMercSize * y + # Generate SQL to materialize a query envelope in EPSG:3857. + # Densify the edges a little so the envelope can be + # safely converted to other coordinate systems. + DENSIFY_FACTOR = 4 + segSize = (xmax - xmin)/DENSIFY_FACTOR + sql01 = 'ST_Segmentize(ST_MakeEnvelope(' + str(xmin) + ', ' + str(ymin) + ', ' + str(xmax) + ', ' + str(ymax) + ', 3857), ' + str(segSize) +')' + + # Generate a SQL query to pull a tile worth of MVT data + # from the table of interest. 
+ # Materialize the bounds + # Select the relevant geometry and clip to MVT bounds + # Convert to MVT format + sql02 = 'WITH bounds AS (SELECT ' + sql01 + ' AS geom, ' + sql01 + '::box2d AS b2d), mvtgeom AS (SELECT ST_AsMVTGeom(ST_Transform(t.geom, 3857), bounds.b2d) AS geom, elev FROM contourlines_smooth t, bounds WHERE ST_Intersects(t.geom, ST_Transform(bounds.geom, 4326))) SELECT ST_AsMVT(mvtgeom.*) FROM mvtgeom' + + # Run tile query SQL and return error on failure conditions + # Make and hold connection to database + if not DATABASE_CONNECTION: + try: + DATABASE_CONNECTION = psycopg2.connect(**DATABASE) + logging.info('Connected to database.') + except (Exception, psycopg2.Error) as error: + logging.error('ERROR: Cannot connect to database.') + # Query for MVT + with DATABASE_CONNECTION.cursor() as cur: + cur.execute(sql02) + if not cur: + logging.error('ERROR: SQL Query failed.') + pbf = cur.fetchone()[0] + logging.info('Queried database') + + if zoom and x and y: + return func.HttpResponse( + # f"This HTTP triggered function executed successfully.\n\nzoom={zoom}\nx={x}\ny={y}\n\nxmin={xmin}\nxmax={xmax}\nymin={ymin}\nymax={ymax}\n\nsql01={sql01}\n\nsql02={sql02}", + bytes(pbf), + status_code=200, + headers={"Content-type": "application/vnd.mapbox-vector-tile","Access-Control-Allow-Origin": "*"} + ) + else: + return func.HttpResponse( + "ERROR: Missing parameter!", + status_code=400 + ) + ``` ++To see the results of the code sample, run it locally: ++```http +http://localhost:7071/api/tileserver?zoom={z}&x={x}&y={y} +``` ++[Microsoft Azure Cloud]: https://azure.microsoft.com/free/cloud-services +[USGS EarthExplorer]: https://earthexplorer.usgs.gov/ +[QGIS]: https://www.qgis.org/en/site/forusers/download.html +[rio-rgbify]: https://pypi.org/project/rio-rgbify/ +[PostgreSQL]: https://www.postgresql.org/download/ +[PostGIS]: https://postgis.net/install/ +[Azure Maps Web SDK]: about-azure-maps.md#web-sdk +[Create Contour line vector tiles and RGB-encoded DEM tiles]: #create-contour-line-vector-tiles-and-rgb-encoded-dem-tiles +[Create Contour line vector tile service using Azure Function and PostgreSQL]: #create-contour-line-vector-tile-service-using-azure-function-and-postgresql +[Azure Storage Explorer]: https://azure.microsoft.com/products/storage/storage-explorer/ |
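The elevation guide in this entry encodes elevation into RGB tiles and decodes it with `elevation (m) = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)`. The short sketch below isolates that decoding math and the longitude/latitude-to-tile/pixel conversion used by the Azure Function so the math can be tested locally without Blob Storage; the zoom level and sample coordinate match the excerpt, and the RGB values in the example are illustrative only.

```python
# Stand-alone sketch of the math used by the elevation Azure Function above: convert a
# longitude/latitude to tile and in-tile pixel coordinates at a fixed zoom, then decode
# an RGB pixel into metres. No tile download here; pass in the pixel's RGB values.
import math

def lnglat_to_tile_pixel(lng: float, lat: float, zoom: int = 14):
    """Return (tile_x, tile_y, pixel_x, pixel_y) for a WGS84 coordinate."""
    gx = (lng + 180) / 360
    sin_lat = math.sin(lat * math.pi / 180)
    gy = 0.5 - math.log((1 + sin_lat) / (1 - sin_lat)) / (4 * math.pi)
    map_size = math.ceil(256 * math.pow(2, zoom))
    gxc = min(max(gx * map_size + 0.5, 0), map_size - 1)
    gyc = min(max(gy * map_size + 0.5, 0), map_size - 1)
    tile_x, tile_y = int(gxc / 256), int(gyc / 256)
    return tile_x, tile_y, math.floor(gxc - tile_x * 256), math.floor(gyc - tile_y * 256)

def decode_elevation(r: int, g: int, b: int) -> float:
    """Decode an rio-rgbify pixel: elevation (m) = -10000 + ((R*256*256 + G*256 + B) * 0.1)."""
    return -10000 + ((r * 256 * 256 + g * 256 + b) * 0.1)

# Example with the coordinate used in the excerpt; the RGB values are illustrative only.
print(lnglat_to_tile_pixel(-122.01911, 47.67091))
print(decode_elevation(1, 138, 220))  # about 108.4 m
```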
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Application Insights is an extension of [Azure Monitor](../overview.md) and prov 1. *Proactively* understand how an application is performing. 1. *Reactively* review application execution data to determine the cause of an incident. -In addition to collecting [Metrics](standard-metrics.md) and application [Telemetry](data-model.md) data, which describe application activities and health, Application Insights can also be used to collect and store application [trace logging data](asp-net-trace-logs.md). +In addition to collecting [Metrics](standard-metrics.md) and application [Telemetry](data-model-complete.md) data, which describe application activities and health, Application Insights can also be used to collect and store application [trace logging data](asp-net-trace-logs.md). The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs; the logging framework rarely needs to be changed. |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | This section provides answers to common questions. ### How does the automatic dependency collector report failed calls to dependencies? -Failed dependency calls will have the `success` field set to False. The module `DependencyTrackingTelemetryModule` doesn't report `ExceptionTelemetry`. The full data model for dependency is described [Dependency telemetry: Application Insights data model](data-model-dependency-telemetry.md). +Failed dependency calls will have the `success` field set to False. The module `DependencyTrackingTelemetryModule` doesn't report `ExceptionTelemetry`. The full data model for dependency is described in [Application Insights telemetry data model](data-model-complete.md#dependency). ### How do I calculate ingestion latency for my dependency telemetry? A list of the latest [currently supported modules](https://github.com/microsoft/ * Set up custom dependency tracking for [Java](opentelemetry-enable.md?tabs=java#add-custom-spans). * Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md). * [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)-* See [data model](./data-model.md) for Application Insights types and data model. +* See [data model](./data-model-complete.md) for Application Insights types and data model. * Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | -Auto-instrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). +Auto-instrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). > [!div class="checklist"] > - No code changes are required. |
azure-monitor | Correlation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md | This article explains the data model used by Application Insights to correlate t ## Data model for telemetry correlation -Application Insights defines a [data model](../../azure-monitor/app/data-model.md) for distributed telemetry correlation. To associate telemetry with a logical operation, every telemetry item has a context field called `operation_Id`. This identifier is shared by every telemetry item in the distributed trace. So even if you lose telemetry from a single layer, you can still associate telemetry reported by other components. +Application Insights defines a [data model](../../azure-monitor/app/data-model-complete.md) for distributed telemetry correlation. To associate telemetry with a logical operation, every telemetry item has a context field called `operation_Id`. This identifier is shared by every telemetry item in the distributed trace. So even if you lose telemetry from a single layer, you can still associate telemetry reported by other components. -A distributed logical operation typically consists of a set of smaller operations that are requests processed by one of the components. These operations are defined by [request telemetry](../../azure-monitor/app/data-model-request-telemetry.md). Every request telemetry item has its own `id` that identifies it uniquely and globally. And all telemetry items (such as traces and exceptions) that are associated with the request should set the `operation_parentId` to the value of the request `id`. +A distributed logical operation typically consists of a set of smaller operations that are requests processed by one of the components. These operations are defined by [request telemetry](../../azure-monitor/app/data-model-complete.md#request). Every request telemetry item has its own `id` that identifies it uniquely and globally. And all telemetry items (such as traces and exceptions) that are associated with the request should set the `operation_parentId` to the value of the request `id`. -Every outgoing operation, such as an HTTP call to another component, is represented by [dependency telemetry](../../azure-monitor/app/data-model-dependency-telemetry.md). Dependency telemetry also defines its own `id` that's globally unique. Request telemetry, initiated by this dependency call, uses this `id` as its `operation_parentId`. +Every outgoing operation, such as an HTTP call to another component, is represented by [dependency telemetry](../../azure-monitor/app/data-model-complete.md#dependency). Dependency telemetry also defines its own `id` that's globally unique. Request telemetry, initiated by this dependency call, uses this `id` as its `operation_parentId`. You can build a view of the distributed logical operation by using `operation_Id`, `operation_parentId`, and `request.id` with `dependency.id`. These fields also define the causality order of telemetry calls. The [W3C Trace-Context](https://w3c.github.io/trace-context/) and Application In | `Operation_Id` | [trace-id](https://w3c.github.io/trace-context/#trace-id) | | `Operation_ParentId` | [parent-id](https://w3c.github.io/trace-context/#parent-id) of this span's parent span. This field must be empty if it's a root span.| -For more information, see [Application Insights telemetry data model](../../azure-monitor/app/data-model.md). +For more information, see [Application Insights telemetry data model](../../azure-monitor/app/data-model-complete.md). 
### Enable W3C distributed tracing support for .NET apps You can also set the cloud role name via environment variable or system property - For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md). - Learn more about [setting cloud_RoleName](./app-map.md#set-or-override-cloud-role-name) for other SDKs. - Onboard all components of your microservice on Application Insights. Check out the [supported platforms](./app-insights-overview.md#supported-languages).-- See the [data model](./data-model.md) for Application Insights types.+- See the [data model](./data-model-complete.md) for Application Insights types. - Learn how to [extend and filter telemetry](./api-filtering-sampling.md). - Review the [Application Insights config reference](configuration-with-applicationinsights-config.md). |
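As a companion to the cloud role name guidance above, here's a minimal sketch of setting `cloud_RoleName` from code with the .NET SDK's `ITelemetryInitializer` extension point. The role name `frontend` is a placeholder, and the initializer still needs to be registered with your telemetry configuration (for example, `services.AddSingleton<ITelemetryInitializer, CloudRoleNameInitializer>()` in ASP.NET Core).

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps cloud_RoleName on every telemetry item so this component
// appears as its own node on the Application Map.
public class CloudRoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            telemetry.Context.Cloud.RoleName = "frontend"; // placeholder role name
        }
    }
}
```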
azure-monitor | Custom Operations Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md | Each Application Insights operation (request or dependency) involves `Activity`. - Learn the basics of [telemetry correlation](correlation.md) in Application Insights. - Check out how correlated data powers [transaction diagnostics experience](./transaction-diagnostics.md) and [Application Map](./app-map.md).-- See the [data model](./data-model.md) for Application Insights types and data model.+- See the [data model](./data-model-complete.md) for Application Insights types and data model. - Report custom [events and metrics](./api-custom-events-metrics.md) to Application Insights. - Check out standard [configuration](configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet) for context properties collection. - Check the [System.Diagnostics.Activity User Guide](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/ActivityUserGuide.md) to see how we correlate telemetry. |
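For a concrete picture of how `Activity` and operation tracking fit together, here's a sketch of the .NET SDK's `StartOperation`/`StopOperation` pattern for a background scenario that auto-collection doesn't cover. The queue-message scenario and names are assumptions for illustration.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

class QueueWorkerSketch
{
    static void ProcessMessage(TelemetryClient client, string message)
    {
        // StartOperation creates a RequestTelemetry and an Activity, so telemetry
        // tracked inside the scope inherits operation_Id and operation_parentId.
        using (var operation = client.StartOperation<RequestTelemetry>("Process queue message"))
        {
            try
            {
                client.TrackTrace($"Handling message of length {message.Length}");
                // ... business logic would go here ...
                operation.Telemetry.Success = true;
            }
            catch (Exception ex)
            {
                client.TrackException(ex);
                operation.Telemetry.Success = false;
                throw;
            }
        } // Dispose calls StopOperation and sends the request telemetry.
    }
}
```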
azure-monitor | Data Model Complete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md | + + Title: Application Insights telemetry data model +description: This article describes the Application Insights telemetry data model including Request, Dependency, Exception, Trace, Event, Metric, PageView, and Context. ++documentationcenter: .net +++ ibiza + Last updated : 03/17/2023+++# Application Insights telemetry data model ++[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform and language-independent monitoring. ++Data collected by Application Insights models this typical application execution pattern. ++ ++The following types of telemetry are used to monitor the execution of your app. Three types are automatically collected by the Application Insights SDK from the web application framework: ++* [Request](#request): Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives. ++ An *operation* is made up of the threads of execution that process a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. The ID can be used to [group](./correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails and has a duration of time. +* [Exception](#exception): Typically represents an exception that causes an operation to fail. +* [Dependency](#dependency): Represents a call from your app to an external service or storage, such as a REST API or SQL. In ASP.NET, dependency calls to SQL are defined by `System.Data`. Calls to HTTP endpoints are defined by `System.Net`. ++Application Insights provides three data types for custom telemetry: ++* [Trace](#trace): Used either directly or through an adapter to implement diagnostics logging by using an instrumentation framework that's familiar to you, such as `Log4Net` or `System.Diagnostics`. +* [Event](#event): Typically used to capture user interaction with your service to analyze usage patterns. +* [Metric](#metric): Used to report periodic scalar measurements. ++Every telemetry item can define the [context information](#context) like application version or user session ID. Context is a set of strongly typed fields that unblocks certain scenarios. When application version is properly initialized, Application Insights can detect new patterns in application behavior correlated with redeployment. ++You can use session ID to calculate an outage or an issue impact on users. Calculating the distinct count of session ID values for a specific failed dependency, error trace, or critical exception gives you a good understanding of an impact. ++The Application Insights telemetry model defines a way to [correlate](./correlation.md) telemetry to the operation of which it's a part. For example, a request can make a SQL Database call and record diagnostics information. You can set the correlation context for those telemetry items that tie it back to the request telemetry. 
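To illustrate the three custom telemetry types described above, here's a minimal C# sketch using the .NET SDK; the event name, property, and metric values are placeholders rather than values from the article.

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

class CustomTelemetrySketch
{
    static void Main()
    {
        var client = new TelemetryClient(TelemetryConfiguration.CreateDefault());

        // Trace: printf-style diagnostic logging.
        client.TrackTrace("Cache warm-up started", SeverityLevel.Information);

        // Event: business/usage telemetry, such as a user interaction.
        client.TrackEvent("OrderCheckout", new Dictionary<string, string>
        {
            ["PaymentMethod"] = "CreditCard" // illustrative custom property
        });

        // Metric: periodic scalar measurement, pre-aggregated locally by the SDK.
        client.GetMetric("QueueLength").TrackValue(42);

        client.Flush();
    }
}
```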
++## Schema improvements ++The Application Insights data model is a basic yet powerful way to model your application telemetry. We strive to keep the model simple and slim to support essential scenarios and allow the schema to be extended for advanced use. ++To report data model or schema problems and suggestions, use our [GitHub repository](https://github.com/microsoft/ApplicationInsights-dotnet/issues/new/choose). ++## Request ++A request telemetry item in [Application Insights](./app-insights-overview.md) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by a unique `id` and `url` that contain all the execution parameters. ++You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions can be grouped further by `resultCode`. Start time for the request telemetry is defined on the envelope level. ++Request telemetry supports the standard extensibility model by using custom `properties` and `measurements`. +++### Name ++The name of the request represents the code path taken to process the request. A low cardinality value allows for better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value. ++The Application Insights web SDK sends a request name "as is" about letter case. Grouping on the UI is case sensitive, so `GET /Home/Index` is counted separately from `GET /home/INDEX` even though often they result in the same controller and action execution. The reason for that is that URLs in general are [case sensitive](https://www.w3.org/TR/WD-html40-970708/htmlweb.html). You might want to see if all `404` errors happened for URLs typed in uppercase. You can read more about request name collection by the ASP.NET web SDK in the [blog post](https://apmtips.com/posts/2015-02-23-request-name-and-url/). ++**Maximum length**: 1,024 characters ++### ID ++ID is the identifier of a request call instance. It's used for correlation between the request and other telemetry items. The ID should be globally unique. For more information, see [Telemetry correlation in Application Insights](./correlation.md). ++**Maximum length**: 128 characters ++### URL ++URL is the request URL with all query string parameters. ++**Maximum length**: 2,048 characters ++### Source ++Source is the source of the request. Examples are the instrumentation key of the caller or the IP address of the caller. For more information, see [Telemetry correlation in Application Insights](./correlation.md). ++**Maximum length**: 1,024 characters ++### Duration ++The request duration is formatted as `DD.HH:MM:SS.MMMMMM`. It must be positive and less than `1000` days. This field is required because request telemetry represents the operation with the beginning and the end. ++### Response code ++The response code is the result of a request execution. It's the HTTP status code for HTTP requests. It might be an `HRESULT` value or an exception type for other request types. ++**Maximum length**: 1,024 characters ++### Success ++Success indicates whether a call was successful or unsuccessful. This field is required. When a request isn't set explicitly to `false`, it's considered to be successful. Set this value to `false` if the operation was interrupted by an exception or a returned error result code. 
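The following C# sketch populates the request fields described above by hand, which is mostly useful when the web SDK's automatic request collection doesn't apply; the URL, source, and result values are placeholders.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

class RequestFieldsSketch
{
    static void Main()
    {
        var client = new TelemetryClient(TelemetryConfiguration.CreateDefault());

        var request = new RequestTelemetry
        {
            Name = "GET /values/{id}",            // low-cardinality code path
            Url = new Uri("https://contoso.example/values/42?verbose=true"),
            Source = "front-door",                // placeholder caller identity
            Duration = TimeSpan.FromMilliseconds(87),
            ResponseCode = "500",
            Success = false                       // interrupted by an error result code
        };
        request.Properties["recordType"] = "customer"; // custom property
        request.Metrics["itemsReturned"] = 0;          // custom measurement

        client.TrackRequest(request);
        client.Flush();
    }
}
```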
++For web applications, Application Insights defines a request as successful when the response code is less than `400` or equal to `401`. However, there are cases when this default mapping doesn't match the semantics of the application. ++Response code `404` might indicate "no records," which can be part of regular flow. It also might indicate a broken link. For broken links, you can implement more advanced logic. You can mark broken links as failures only when those links are located on the same site by analyzing the URL referrer. Or you can mark them as failures when they're accessed from the company's mobile application. Similarly, `301` and `302` indicate failure when they're accessed from the client that doesn't support redirect. ++Partially accepted content `206` might indicate a failure of an overall request. For instance, an Application Insights endpoint might receive a batch of telemetry items as a single request. It returns `206` when some items in the batch weren't processed successfully. An increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status where the success might be the worst of separate response codes. ++You can read more about the request result code and status code in this [blog post](https://apmtips.com/posts/2016-12-03-request-success-and-response-code/). ++### Custom properties +++### Custom measurements +++## Dependency ++Dependency telemetry (in [Application Insights](./app-insights-overview.md)) represents an interaction of the monitored component with a remote component such as SQL or an HTTP endpoint. ++### Name ++Name of the command initiated with this dependency call. Low cardinality value. Examples are stored procedure name and URL path template. ++### ID ++Identifier of a dependency call instance. Used for correlation with the request telemetry item corresponding to this dependency call. For more information, see the [correlation](./correlation.md) page. ++### Data ++Command initiated by this dependency call. Examples are SQL statement and HTTP URL with all query parameters. ++### Type ++Dependency type name. Low cardinality value for logical grouping of dependencies and interpretation of other fields like commandName and resultCode. Examples are SQL, Azure table, and HTTP. ++### Target ++Target site of a dependency call. Examples are server name and host address. For more information, see the [correlation](./correlation.md) page. ++### Duration ++Duration of the dependency call in the format `DD.HH:MM:SS.MMMMMM`. Must be less than `1000` days. ++### Result code ++Result code of a dependency call. Examples are SQL error code and HTTP status code. ++### Success ++Indication of successful or unsuccessful call. ++### Custom properties +++### Custom measurements +++## Exception ++In [Application Insights](./app-insights-overview.md), an instance of Exception represents a handled or unhandled exception that occurred during execution of the monitored application. ++### Problem Id ++Identifier of where the exception was thrown in code. Used for exception grouping. Typically a combination of exception type and a function from the call stack. ++**Maximum length**: 1,024 characters ++### Severity level ++Severity level of the exception. Value can be `Verbose`, `Information`, `Warning`, `Error`, or `Critical`. ++### Exception details ++(To be extended) ++### Custom properties +++### Custom measurements +++## Trace ++Trace telemetry in [Application Insights](./app-insights-overview.md) represents `printf`-style trace statements that are text searched.
`Log4Net`, `NLog`, and other text-based log file entries are translated into instances of this type. The trace type doesn't have measurements as an extensibility point. ++### Message ++Trace message. ++**Maximum length**: 32,768 characters ++### Severity level ++Trace severity level. ++**Values**: `Verbose`, `Information`, `Warning`, `Error`, and `Critical` ++### Custom properties +++## Event ++You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or order checkout. It can also be an application lifecycle event like initialization or a configuration update. ++Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md). ++### Name ++Event name: To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event. ++**Maximum length:** 512 characters ++### Custom properties +++### Custom measurements +++## Metric ++There are two types of metric telemetry supported by [Application Insights](./app-insights-overview.md): single measurement and pre-aggregated metric. A single measurement is just a name and value. A pre-aggregated metric specifies the minimum and maximum value of the metric in the aggregation interval and its standard deviation. ++Pre-aggregated metric telemetry assumes that the aggregation period was one minute. ++There are several well-known metric names supported by Application Insights. These metrics are placed into the performanceCounters table. ++Metrics representing system and process counters: ++| **.NET name** | **Platform agnostic name** | **Description** +| - | -- | - +| `\Processor(_Total)\% Processor Time` | Work in progress... | total machine CPU +| `\Memory\Available Bytes` | Work in progress... | Shows the amount of physical memory, in bytes, available to processes running on the computer. It is calculated by summing the amount of space on the zeroed, free, and standby memory lists. Free memory is ready for use; zeroed memory consists of pages of memory filled with zeros to prevent later processes from seeing data used by a previous process; standby memory is memory that has been removed from a process's working set (its physical memory) en route to disk but is still available to be recalled. See [Memory Object](/previous-versions/ms804008(v=msdn.10)) +| `\Process(??APP_WIN32_PROC??)\% Processor Time` | Work in progress... | CPU of the process hosting the application +| `\Process(??APP_WIN32_PROC??)\Private Bytes` | Work in progress... | memory used by the process hosting the application +| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | Work in progress... | rate of I/O operations run by the process hosting the application +| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests/Sec` | Work in progress... | rate of requests processed by the application +| `\.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec` | Work in progress... | rate of exceptions thrown by the application +| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Request Execution Time` | Work in progress... | average request execution time +| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests In Application Queue` | Work in progress... 
| number of requests waiting for processing in a queue ++See [Metrics - Get](/rest/api/application-insights/metrics/get) for more information on the Metrics REST API. ++### Name ++Name of the metric you'd like to see in the Application Insights portal and UI. ++### Value ++Single value for measurement. Sum of individual measurements for the aggregation. ++### Count ++Metric weight of the aggregated metric. Should not be set for a measurement. ++### Min ++Minimum value of the aggregated metric. Should not be set for a measurement. ++### Max ++Maximum value of the aggregated metric. Should not be set for a measurement. ++### Standard deviation ++Standard deviation of the aggregated metric. Should not be set for a measurement. ++### Custom properties ++A metric with the custom property `CustomPerfCounter` set to `true` indicates that the metric represents a Windows performance counter. These metrics are placed in the performanceCounters table, not in customMetrics. The name of such a metric is also parsed to extract the category, counter, and instance names. +++## PageView ++PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and isn't necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages isn't tied to browser page actions. [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user. ++> [!NOTE] +> * By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can track additional PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views). +> * The default logs retention is 30 days and needs to be adjusted if you want to view page view statistics over a longer period of time. ++### Measuring browserTiming in Application Insights ++Modern browsers expose measurements for page load actions with the [Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance_API). Application Insights simplifies these measurements by consolidating related timings into [standard browser metrics](../essentials/metrics-supported.md#microsoftinsightscomponents) as defined by these processing time definitions: ++* Client <--> DNS: Client reaches out to DNS to resolve website hostname, DNS responds with IP address. +* Client <--> Web Server: Client creates TCP then TLS handshakes with web server. +* Client <--> Web Server: Client sends request payload, waits for server to execute request, and receives first response packet. +* Client <-- Web Server: Client receives the rest of the response payload bytes from the web server. +* Client: Client now has full response payload and has to render contents into browser and load the DOM. 
+ +* `browserTimings/networkDuration` = #1 + #2 +* `browserTimings/sendDuration` = #3 +* `browserTimings/receiveDuration` = #4 +* `browserTimings/processingDuration` = #5 +* `browserTimings/totalDuration` = #1 + #2 + #3 + #4 + #5 +* `pageViews/duration` + * The PageView duration is from the browser's performance timing interface, [`PerformanceNavigationTiming.duration`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceEntry/duration). + * If `PerformanceNavigationTiming` is available, that duration is used. + * If it's not, then the *deprecated* [`PerformanceTiming`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming) interface is used and the delta between [`NavigationStart`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/navigationStart) and [`LoadEventEnd`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/loadEventEnd) is calculated. + * The developer specifies a duration value when logging custom PageView events using the [trackPageView API call](./api-custom-events-metrics.md#page-views). +++## Context ++Every telemetry item might have a strongly typed context field. Every field enables a specific monitoring scenario. Use the custom properties collection to store custom or application-specific contextual information. ++### Application version ++Information in the application context fields is always about the application that's sending the telemetry. The application version is used to analyze trend changes in the application behavior and its correlation to the deployments. ++**Maximum length:** 1,024 ++### Client IP address ++This field is the IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user who initiated the operation in the service. Application Insights extracts the geo-location information from the client IP and then truncates it. The client IP by itself can't be used as user identifiable information. ++**Maximum length:** 46 ++### Device type ++Originally, this field was used to indicate the type of the device the user of the application is using. Today it's used primarily to distinguish JavaScript telemetry with the device type `Browser` from server-side telemetry with the device type `PC`. ++**Maximum length:** 64 ++### Operation ID ++This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](./correlation.md). The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view. ++**Maximum length:** 128 ++### Parent operation ID ++This field is the unique identifier of the telemetry item's immediate parent. For more information, see [Telemetry correlation](./correlation.md). ++**Maximum length:** 128 ++### Operation name ++This field is the name (group) of the operation. The operation name is created by either a request or a page view. All other telemetry items set this field to the value for the containing request or page view. The operation name is used for finding all the telemetry items for a group of operations (for example, `GET Home/Index`). This context property is used to answer questions like "What are the typical exceptions thrown on this page?" ++**Maximum length:** 1,024 ++### Synthetic source of the operation ++This field is the name of the synthetic source. 
Some telemetry from the application might represent synthetic traffic. It might be the web crawler indexing the website, site availability tests, or traces from diagnostic libraries like the Application Insights SDK itself. ++**Maximum length:** 1,024 ++### Session ID ++Session ID is the instance of the user's interaction with the app. Information in the session context fields is always about the user. When telemetry is sent from a service, the session context is about the user who initiated the operation in the service. ++**Maximum length:** 64 ++### Anonymous user ID ++The anonymous user ID (User.Id) represents the user of the application. When telemetry is sent from a service, the user context is about the user who initiated the operation in the service. ++[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. A sampling algorithm attempts to either sample in or out all the correlated telemetry. An anonymous user ID is used for sampling score generation, so an anonymous user ID should be a random enough value. ++> [!NOTE] +> The count of anonymous user IDs isn't the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or cleans up browser cookies, a new unique anonymous user ID is allocated. This calculation might result in counting the same physical users multiple times. ++User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration. ++Using an anonymous user ID to store a username is a misuse of the field. Use an authenticated user ID. ++**Maximum length:** 128 ++### Authenticated user ID ++An authenticated user ID is the opposite of an anonymous user ID. This field represents the user with a friendly name. This ID is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs). ++Use the Application Insights SDK to initialize the authenticated user ID with a value that identifies the user persistently across browsers and devices. In this way, all telemetry items are attributed to that unique ID. This ID enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)). ++User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration. ++**Maximum length:** 1,024 ++### Account ID ++The account ID, in multi-tenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when a user ID and an authenticated user ID aren't sufficient. Examples might be a subscription ID for the Azure portal or the blog name for a blogging platform. ++**Maximum length:** 1,024 ++### Cloud role ++This field is the name of the role of which the application is a part. It maps directly to the role name in Azure. It can also be used to distinguish micro services, which are part of a single application. ++**Maximum length:** 256 ++### Cloud role instance ++This field is the name of the instance where the application is running. For example, it's the computer name for on-premises or the instance name for Azure. 
++**Maximum length:** 256 ++### Internal: SDK version ++For more information, see this [SDK version article](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/EndpointSpecs/SDK-VERSIONS.md). ++**Maximum length:** 64 ++### Internal: Node name ++This field represents the node name used for billing purposes. Use it to override the standard detection of nodes. ++**Maximum length:** 256 ++## Next steps ++Learn how to use [Application Insights API for custom events and metrics](./api-custom-events-metrics.md), including: +- [Custom request telemetry](./api-custom-events-metrics.md#trackrequest) +- [Custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) +- [Custom trace telemetry](./api-custom-events-metrics.md#tracktrace) +- [Custom event telemetry](./api-custom-events-metrics.md#trackevent) +- [Custom metric telemetry](./api-custom-events-metrics.md#trackmetric) ++Set up dependency tracking for: +- [.NET](./asp-net-dependencies.md) +- [Java](./opentelemetry-enable.md?tabs=java) ++Learn more: ++- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. +- Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet). +- Explore [.NET trace logs in Application Insights](./asp-net-trace-logs.md). +- Explore [Java trace logs in Application Insights](./opentelemetry-enable.md?tabs=java#logs). +- Learn about [Azure Functions' built-in integration with Application Insights](../../azure-functions/functions-monitoring.md?toc=/azure/azure-monitor/toc.json) to monitor functions executions. +- Learn how to [configure an ASP.NET Core](./asp-net.md) application with Application Insights. +- Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md). +- Learn how to [extend and filter telemetry](./api-filtering-sampling.md). +- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model. |
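To tie the context fields above back to code, here's a minimal sketch that sets several of them on a .NET SDK `TelemetryClient`. Every value shown is a placeholder; in a real application, a telemetry initializer is the usual place to set context that should apply to all items.

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class ContextFieldsSketch
{
    static TelemetryClient CreateClient()
    {
        var client = new TelemetryClient(TelemetryConfiguration.CreateDefault());

        // Strongly typed context fields; all values below are placeholders.
        client.Context.Component.Version = "2.8.1";          // application version
        client.Context.Cloud.RoleName = "orders-api";        // cloud role
        client.Context.Cloud.RoleInstance = "orders-api-01"; // cloud role instance
        client.Context.Session.Id = "1b6e7e3a";              // session ID
        client.Context.User.Id = "anon-5f2c";                // anonymous user ID
        client.Context.User.AuthenticatedUserId = "user@contoso.example";

        return client;
    }
}
```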
azure-monitor | Data Model Context | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md | - Title: 'Application Insights telemetry data model: Telemetry context | Microsoft Docs' -description: Learn about the Application Insights telemetry context data model. - Previously updated : 05/15/2017----# Telemetry context: Application Insights data model --Every telemetry item might have a strongly typed context field. Every field enables a specific monitoring scenario. Use the custom properties collection to store custom or application-specific contextual information. --## Application version --Information in the application context fields is always about the application that's sending the telemetry. The application version is used to analyze trend changes in the application behavior and its correlation to the deployments. --**Maximum length:** 1,024 --## Client IP address --This field is the IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user who initiated the operation in the service. Application Insights extract the geo-location information from the client IP and then truncate it. The client IP by itself can't be used as user identifiable information. --**Maximum length:** 46 --## Device type --Originally, this field was used to indicate the type of the device the user of the application is using. Today it's used primarily to distinguish JavaScript telemetry with the device type `Browser` from server-side telemetry with the device type `PC`. --**Maximum length:** 64 --## Operation ID --This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](./correlation.md). The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view. --**Maximum length:** 128 --## Parent operation ID --This field is the unique identifier of the telemetry item's immediate parent. For more information, see [Telemetry correlation](./correlation.md). --**Maximum length:** 128 --## Operation name --This field is the name (group) of the operation. The operation name is created by either a request or a page view. All other telemetry items set this field to the value for the containing request or page view. The operation name is used for finding all the telemetry items for a group of operations (for example, `GET Home/Index`). This context property is used to answer questions like What are the typical exceptions thrown on this page? --**Maximum length:** 1,024 --## Synthetic source of the operation --This field is the name of the synthetic source. Some telemetry from the application might represent synthetic traffic. It might be the web crawler indexing the website, site availability tests, or traces from diagnostic libraries like the Application Insights SDK itself. --**Maximum length:** 1,024 --## Session ID --Session ID is the instance of the user's interaction with the app. Information in the session context fields is always about the user. When telemetry is sent from a service, the session context is about the user who initiated the operation in the service. --**Maximum length:** 64 --## Anonymous user ID --The anonymous user ID (User.Id) represents the user of the application. 
When telemetry is sent from a service, the user context is about the user who initiated the operation in the service. --[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. A sampling algorithm attempts to either sample in or out all the correlated telemetry. An anonymous user ID is used for sampling score generation, so an anonymous user ID should be a random enough value. --> [!NOTE] -> The count of anonymous user IDs isn't the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or cleans up browser cookies, a new unique anonymous user ID is allocated. This calculation might result in counting the same physical users multiple times. --User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration. --Using an anonymous user ID to store a username is a misuse of the field. Use an authenticated user ID. --**Maximum length:** 128 --## Authenticated user ID --An authenticated user ID is the opposite of an anonymous user ID. This field represents the user with a friendly name. This ID is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs). --Use the Application Insights SDK to initialize the authenticated user ID with a value that identifies the user persistently across browsers and devices. In this way, all telemetry items are attributed to that unique ID. This ID enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)). --User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration. --**Maximum length:** 1,024 --## Account ID --The account ID, in multi-tenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when a user ID and an authenticated user ID aren't sufficient. Examples might be a subscription ID for the Azure portal or the blog name for a blogging platform. --**Maximum length:** 1,024 --## Cloud role --This field is the name of the role of which the application is a part. It maps directly to the role name in Azure. It can also be used to distinguish micro services, which are part of a single application. --**Maximum length:** 256 --## Cloud role instance --This field is the name of the instance where the application is running. For example, it's the computer name for on-premises or the instance name for Azure. --**Maximum length:** 256 --## Internal: SDK version --For more information, see this [SDK version article](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/EndpointSpecs/SDK-VERSIONS.md). --**Maximum length:** 64 --## Internal: Node name --This field represents the node name used for billing purposes. Use it to override the standard detection of nodes. 
--**Maximum length:** 256 --## Next steps --- Learn how to [extend and filter telemetry](./api-filtering-sampling.md).-- See the [Application Insights telemetry data model](data-model.md) for Application Insights types and data model.-- Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet). |
azure-monitor | Data Model Dependency Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-dependency-telemetry.md | - Title: Azure Monitor Application Insights Dependency Data Model -description: Application Insights data model for dependency telemetry - Previously updated : 04/17/2017---# Dependency telemetry: Application Insights data model --Dependency Telemetry (in [Application Insights](./app-insights-overview.md)) represents an interaction of the monitored component with a remote component such as SQL or an HTTP endpoint. --## Name --Name of the command initiated with this dependency call. Low cardinality value. Examples are stored procedure name and URL path template. --## ID --Identifier of a dependency call instance. Used for correlation with the request telemetry item corresponding to this dependency call. For more information, see [correlation](./correlation.md) page. --## Data --Command initiated by this dependency call. Examples are SQL statement and HTTP URL with all query parameters. --## Type --Dependency type name. Low cardinality value for logical grouping of dependencies and interpretation of other fields like commandName and resultCode. Examples are SQL, Azure table, and HTTP. --## Target --Target site of a dependency call. Examples are server name, host address. For more information, see [correlation](./correlation.md) page. --## Duration --Request duration in format: `DD.HH:MM:SS.MMMMMM`. Must be less than `1000` days. --## Result code --Result code of a dependency call. Examples are SQL error code and HTTP status code. --## Success --Indication of successful or unsuccessful call. --## Custom properties ---## Custom measurements ----## Next steps --- Set up dependency tracking for [.NET](./asp-net-dependencies.md).-- Set up dependency tracking for [Java](./opentelemetry-enable.md?tabs=java).-- [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)-- See [data model](data-model.md) for Application Insights types and data model.-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.- |
azure-monitor | Data Model Event Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md | - Title: Application Insights telemetry data model - Event telemetry | Microsoft Docs -description: Learn about the Application Insights data model for event telemetry. - Previously updated : 04/25/2017----# Event telemetry: Application Insights data model --You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or order checkout. It can also be an application lifecycle event like initialization or a configuration update. --Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md). --## Name --Event name: To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event. --**Maximum length:** 512 characters --## Custom properties ---## Custom measurements ---## Next steps --- See [Data model](data-model.md) for Application Insights types and data models.-- [Write custom event telemetry](./api-custom-events-metrics.md#trackevent).-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Data Model Exception Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-exception-telemetry.md | - Title: Azure Application Insights Exception Telemetry Data model -description: Application Insights data model for exception telemetry - Previously updated : 04/25/2017----# Exception telemetry: Application Insights data model --In [Application Insights](./app-insights-overview.md), an instance of Exception represents a handled or unhandled exception that occurred during execution of the monitored application. --## Problem Id --Identifier of where the exception was thrown in code. Used for exceptions grouping. Typically a combination of exception type and a function from the call stack. --Max length: 1024 characters --## Severity level --Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`, `Critical`. --## Exception details --(To be extended) --## Custom properties ---## Custom measurements ---## Next steps --- See [data model](data-model.md) for Application Insights types and data model.-- Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md).-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.- |
azure-monitor | Data Model Metric Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-metric-telemetry.md | - Title: Data model for metric telemetry - Azure Application Insights -description: Application Insights data model for metric telemetry - Previously updated : 01/24/2023----# Metric telemetry: Application Insights data model --There are two types of metric telemetry supported by [Application Insights](./app-insights-overview.md): single measurement and pre-aggregated metric. Single measurement is just a name and value. Pre-aggregated metric specifies minimum and maximum value of the metric in the aggregation interval and standard deviation of it. --Pre-aggregated metric telemetry assumes that aggregation period was one minute. --There are several well-known metric names supported by Application Insights. These metrics placed into performanceCounters table. --Metric representing system and process counters: --| **.NET name** | **Platform agnostic name** | **Description** -| - | -- | - -| `\Processor(_Total)\% Processor Time` | Work in progress... | total machine CPU -| `\Memory\Available Bytes` | Work in progress... | Shows the amount of physical memory, in bytes, available to processes running on the computer. It is calculated by summing the amount of space on the zeroed, free, and standby memory lists. Free memory is ready for use; zeroed memory consists of pages of memory filled with zeros to prevent later processes from seeing data used by a previous process; standby memory is memory that has been removed from a process's working set (its physical memory) en route to disk but is still available to be recalled. See [Memory Object](/previous-versions/ms804008(v=msdn.10)) -| `\Process(??APP_WIN32_PROC??)\% Processor Time` | Work in progress... | CPU of the process hosting the application -| `\Process(??APP_WIN32_PROC??)\Private Bytes` | Work in progress... | memory used by the process hosting the application -| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | Work in progress... | rate of I/O operations runs by process hosting the application -| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests/Sec` | Work in progress... | rate of requests processed by application -| `\.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec` | Work in progress... | rate of exceptions thrown by application -| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Request Execution Time` | Work in progress... | average requests execution time -| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests In Application Queue` | Work in progress... | number of requests waiting for the processing in a queue --See [Metrics - Get](/rest/api/application-insights/metrics/get) for more information on the Metrics REST API. --## Name --Name of the metric you'd like to see in Application Insights portal and UI. --## Value --Single value for measurement. Sum of individual measurements for the aggregation. --## Count --Metric weight of the aggregated metric. Should not be set for a measurement. --## Min --Minimum value of the aggregated metric. Should not be set for a measurement. --## Max --Maximum value of the aggregated metric. Should not be set for a measurement. --## Standard deviation --Standard deviation of the aggregated metric. Should not be set for a measurement. --## Custom properties --Metric with the custom property `CustomPerfCounter` set to `true` indicate that the metric represents the Windows performance counter. 
These metrics placed in performanceCounters table. Not in customMetrics. Also the name of this metric is parsed to extract category, counter, and instance names. ---## Next steps --- Learn how to use [Application Insights API for custom events and metrics](./api-custom-events-metrics.md#trackmetric).-- See [data model](data-model.md) for Application Insights types and data model.-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Data Model Pageview Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-pageview-telemetry.md | - Title: Azure Application Insights Data Model - PageView Telemetry -description: Application Insights data model for page view telemetry - Previously updated : 09/07/2022----# PageView telemetry: Application Insights data model --PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and isn't necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages isn't tied to browser page actions. [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user. --> [!NOTE] -> * By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can extend additional tracking of PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views). -> * The default logs retention is 30 days and needs to be adjusted if you want to view page view statistics over a longer period of time. --## Measuring browserTiming in Application Insights --Modern browsers expose measurements for page load actions with the [Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance_API). Application Insights simplifies these measurements by consolidating related timings into [standard browser metrics](../essentials/metrics-supported.md#microsoftinsightscomponents) as defined by these processing time definitions: --1. Client <--> DNS: Client reaches out to DNS to resolve website hostname, DNS responds with IP address. -1. Client <--> Web Server: Client creates TCP then TLS handshakes with web server. -1. Client <--> Web Server: Client sends request payload, waits for server to execute request, and receives first response packet. -1. Client <--Web Server: Client receives the rest of the response payload bytes from the web server. -1. Client: Client now has full response payload and has to render contents into browser and load the DOM. - -* `browserTimings/networkDuration` = #1 + #2 -* `browserTimings/sendDuration` = #3 -* `browserTimings/receiveDuration` = #4 -* `browserTimings/processingDuration` = #5 -* `browsertimings/totalDuration` = #1 + #2 + #3 + #4 + #5 -* `pageViews/duration` - * The PageView duration is from the browserΓÇÖs performance timing interface, [`PerformanceNavigationTiming.duration`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceEntry/duration). - * If `PerformanceNavigationTiming` is available that duration is used. - * If itΓÇÖs not, then the *deprecated* [`PerformanceTiming`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming) interface is used and the delta between [`NavigationStart`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/navigationStart) and [`LoadEventEnd`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/loadEventEnd) is calculated. 
- * The developer specifies a duration value when logging custom PageView events using the [trackPageView API call](./api-custom-events-metrics.md#page-views). -- |
azure-monitor | Data Model Request Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md | - Title: Data model for request telemetry - Application Insights -description: This article describes the Application Insights data model for request telemetry. - Previously updated : 01/07/2019----# Request telemetry: Application Insights data model --A request telemetry item in [Application Insights](./app-insights-overview.md) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by a unique `id` and `url` that contain all the execution parameters. --You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions can be grouped further by `resultCode`. Start time for the request telemetry is defined on the envelope level. --Request telemetry supports the standard extensibility model by using custom `properties` and `measurements`. ---## Name --The name of the request represents the code path taken to process the request. A low cardinality value allows for better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value. --The Application Insights web SDK sends a request name "as is" about letter case. Grouping on the UI is case sensitive, so `GET /Home/Index` is counted separately from `GET /home/INDEX` even though often they result in the same controller and action execution. The reason for that is that URLs in general are [case sensitive](https://www.w3.org/TR/WD-html40-970708/htmlweb.html). You might want to see if all `404` errors happened for URLs typed in uppercase. You can read more about request name collection by the ASP.NET web SDK in the [blog post](https://apmtips.com/posts/2015-02-23-request-name-and-url/). --**Maximum length**: 1,024 characters --## ID --ID is the identifier of a request call instance. It's used for correlation between the request and other telemetry items. The ID should be globally unique. For more information, see [Telemetry correlation in Application Insights](./correlation.md). --**Maximum length**: 128 characters --## URL --URL is the request URL with all query string parameters. --**Maximum length**: 2,048 characters --## Source --Source is the source of the request. Examples are the instrumentation key of the caller or the IP address of the caller. For more information, see [Telemetry correlation in Application Insights](./correlation.md). --**Maximum length**: 1,024 characters --## Duration --The request duration is formatted as `DD.HH:MM:SS.MMMMMM`. It must be positive and less than `1000` days. This field is required because request telemetry represents the operation with the beginning and the end. --## Response code --The response code is the result of a request execution. It's the HTTP status code for HTTP requests. It might be an `HRESULT` value or an exception type for other request types. --**Maximum length**: 1,024 characters --## Success --Success indicates whether a call was successful or unsuccessful. This field is required. When a request isn't set explicitly to `false`, it's considered to be successful. Set this value to `false` if the operation was interrupted by an exception or a returned error result code. 
--For web applications, Application Insights defines a request as successful when the response code is less than `400` or equal to `401`. However, there are cases when this default mapping doesn't match the semantics of the application. --Response code `404` might indicate "no records," which can be part of regular flow. It also might indicate a broken link. For broken links, you can implement more advanced logic. You can mark broken links as failures only when those links are located on the same site by analyzing the URL referrer. Or you can mark them as failures when they're accessed from the company's mobile application. Similarly, `301` and `302` indicate failure when they're accessed from the client that doesn't support redirect. --Partially accepted content `206` might indicate a failure of an overall request. For instance, an Application Insights endpoint might receive a batch of telemetry items as a single request. It returns `206` when some items in the batch weren't processed successfully. An increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status where the success might be the worst of separate response codes. --You can read more about the request result code and status code in the [blog post](https://apmtips.com/posts/2016-12-03-request-success-and-response-code/). --## Custom properties ---## Custom measurements ---## Next steps --- [Write custom request telemetry](./api-custom-events-metrics.md#trackrequest).-- See the [data model](data-model.md) for Application Insights types and data models.-- Learn how to [configure an ASP.NET Core](./asp-net.md) application with Application Insights.-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Data Model Trace Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-trace-telemetry.md | - Title: 'Application Insights data model: Trace telemetry' -description: Application Insights data model for trace telemetry. - Previously updated : 04/25/2017---# Trace telemetry: Application Insights data model --Trace telemetry in [Application Insights](./app-insights-overview.md) represents `printf`-style trace statements that are text searched. `Log4Net`, `NLog`, and other text-based log file entries are translated into instances of this type. The trace doesn't have measurements as an extensibility. --## Message --Trace message. --**Maximum length**: 32,768 characters --## Severity level --Trace severity level. --**Values**: `Verbose`, `Information`, `Warning`, `Error`, and `Critical` --## Custom properties ---## Next steps --- Explore [.NET trace logs in Application Insights](./asp-net-trace-logs.md).-- Explore [Java trace logs in Application Insights](./opentelemetry-enable.md?tabs=java#logs).-- See [data model](data-model.md) for Application Insights types and data model.-- Write [custom trace telemetry](./api-custom-events-metrics.md#tracktrace).-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Data Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md | - Title: Application Insights telemetry data model | Microsoft Docs -description: This article presents an overview of the Application Insights telemetry data model. ---- Previously updated : 10/14/2019---# Application Insights telemetry data model --[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform and language-independent monitoring. --Data collected by Application Insights models this typical application execution pattern. -- --The following types of telemetry are used to monitor the execution of your app. Three types are automatically collected by the Application Insights SDK from the web application framework: --* [Request](data-model-request-telemetry.md): Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives. -- An *operation* is made up of the threads of execution that process a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. The ID can be used to [group](./correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails and has a duration of time. -* [Exception](data-model-exception-telemetry.md): Typically represents an exception that causes an operation to fail. -* [Dependency](data-model-dependency-telemetry.md): Represents a call from your app to an external service or storage, such as a REST API or SQL. In ASP.NET, dependency calls to SQL are defined by `System.Data`. Calls to HTTP endpoints are defined by `System.Net`. --Application Insights provides three data types for custom telemetry: --* [Trace](data-model-trace-telemetry.md): Used either directly or through an adapter to implement diagnostics logging by using an instrumentation framework that's familiar to you, such as `Log4Net` or `System.Diagnostics`. -* [Event](data-model-event-telemetry.md): Typically used to capture user interaction with your service to analyze usage patterns. -* [Metric](data-model-metric-telemetry.md): Used to report periodic scalar measurements. --Every telemetry item can define the [context information](data-model-context.md) like application version or user session ID. Context is a set of strongly typed fields that unblocks certain scenarios. When application version is properly initialized, Application Insights can detect new patterns in application behavior correlated with redeployment. --You can use session ID to calculate an outage or an issue impact on users. Calculating the distinct count of session ID values for a specific failed dependency, error trace, or critical exception gives you a good understanding of an impact. --The Application Insights telemetry model defines a way to [correlate](./correlation.md) telemetry to the operation of which it's a part. For example, a request can make a SQL Database call and record diagnostics information. You can set the correlation context for those telemetry items that tie it back to the request telemetry. 
--## Schema improvements --The Application Insights data model is a basic yet powerful way to model your application telemetry. We strive to keep the model simple and slim to support essential scenarios and allow the schema to be extended for advanced use. --To report data model or schema problems and suggestions, use our [GitHub repository](https://github.com/microsoft/ApplicationInsights-dotnet/issues/new/choose). --## Next steps --- [Write custom telemetry](./api-custom-events-metrics.md).-- Learn how to [extend and filter telemetry](./api-filtering-sampling.md).-- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | In [`ApplicationInsights.config`](./configuration-with-applicationinsights-confi * `<ExcludedTypes>type;type</ExcludedTypes>` - A semi-colon delimited list of types that you don't want to be subject to sampling. Recognized types are: [`Dependency`](data-model-dependency-telemetry.md), [`Event`](data-model-event-telemetry.md), [`Exception`](data-model-exception-telemetry.md), [`PageView`](data-model-pageview-telemetry.md), [`Request`](data-model-request-telemetry.md), [`Trace`](data-model-trace-telemetry.md). All telemetry of the specified types is transmitted; the types that aren't specified will be sampled. + A semi-colon delimited list of types that you don't want to be subject to sampling. Recognized types are: [`Dependency`](data-model-complete.md#dependency), [`Event`](data-model-complete.md#event), [`Exception`](data-model-complete.md#exception), [`PageView`](data-model-complete.md#pageview), [`Request`](data-model-complete.md#request), [`Trace`](data-model-complete.md#trace). All telemetry of the specified types is transmitted; the types that aren't specified will be sampled. * `<IncludedTypes>type;type</IncludedTypes>` - A semi-colon delimited list of types that you do want to subject to sampling. Recognized types are: [`Dependency`](data-model-dependency-telemetry.md), [`Event`](data-model-event-telemetry.md), [`Exception`](data-model-exception-telemetry.md), [`PageView`](data-model-pageview-telemetry.md), [`Request`](data-model-request-telemetry.md), [`Trace`](data-model-trace-telemetry.md). The specified types will be sampled; all telemetry of the other types will always be transmitted. + A semi-colon delimited list of types that you do want to subject to sampling. Recognized types are: [`Dependency`](data-model-complete.md#dependency), [`Event`](data-model-complete.md#event), [`Exception`](data-model-complete.md#exception), [`PageView`](data-model-complete.md#pageview), [`Request`](data-model-complete.md#request), [`Trace`](data-model-complete.md#trace). The specified types will be sampled; all telemetry of the other types will always be transmitted. **To switch off** adaptive sampling, remove the `AdaptiveSamplingTelemetryProcessor` node(s) from `ApplicationInsights.config`. |
azure-monitor | Transaction Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md | This behavior is by design. All the related items, across all components, are al ### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK? -The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation. +The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-complete.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID. |
azure-monitor | Usage Segmentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md | Three of the **Usage** panes use the same tool to slice and dice telemetry from A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent). > [!NOTE]-> For information on alternatives to using [anonymous IDs](./data-model-context.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-context.md#authenticated-user-id). +> For information on alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id). ## Query for certain users |
azure-monitor | Data Collection Endpoint Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md | Title: Data collection endpoints in Azure Monitor description: Overview of data collection endpoints (DCEs) in Azure Monitor, including their contents and structure and how you can create and work with them. ++ Last updated 03/16/2022 ms.reviewer: nikeist |
azure-monitor | Data Collection Endpoint Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-sample.md | Title: Sample data collection endpoint description: Sample data collection endpoint below is for virtual machines with Azure Monitor agent ++ Last updated 03/16/2022 |
azure-monitor | Data Collection Rule Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md | Title: Data collection rules in Azure Monitor description: Overview of data collection rules (DCRs) in Azure Monitor including their contents and structure and how you can create and work with them. ++ Last updated 07/15/2022 |
azure-monitor | Data Collection Transformations Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md | Title: Structure of transformation in Azure Monitor description: Structure of transformation in Azure Monitor including limitations of KQL allowed in a transformation.++ Last updated 06/29/2022 ms.reviewer: nikeist |
azure-monitor | Data Collection Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md | Title: Data collection transformations description: Use transformations in a data collection rule in Azure Monitor to filter and modify incoming data. ++ Last updated 06/29/2022 ms.reviewer: nikeist |
azure-monitor | Diagnostics Settings Policies Deployifnotexists | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostics-settings-policies-deployifnotexists.md | -Assign policies to enable resource logs and to send them to destinations according to your needs. Send logs to Event Hubs for third-party SIEM systems, enabling continuous security operations. Send logs to storage accounts for longer term storage or the fulfillment of regulatory compliance. +Assign policies to enable resource logs and to send them to destinations according to your needs. Send logs to event hubs for third-party SIEM systems, enabling continuous security operations. Send logs to storage accounts for longer term storage or the fulfillment of regulatory compliance. -A set of built-in policies and initiatives exists to direct resource logs to Log Analytics Workspaces, Event Hubs, and Storage Accounts. The policies enable audit logging, sending logs belonging to the **audit** log category group to an Event Hub, Log Analytics workspace or Storage Account. The policies' `effect` is `DeployIfNotExists`, which deploys the policy as a default if there aren't other settings defined. +A set of built-in policies and initiatives exists to direct resource logs to Log Analytics Workspaces, Event Hubs, and Storage Accounts. The policies enable audit logging, sending logs belonging to the **audit** log category group to an event hub, Log Analytics workspace or Storage Account. The policies' `effect` is `DeployIfNotExists`, which deploys the policy as a default if there aren't other settings defined. ## Deploy policies. The following steps show how to apply the policy to send audit logs to for key v 1. Select **Monitoring** from the Category dropdown 1. Enter *keyvault* in the **Search** field. 1. Select the **Enable logging by category group for Key vaults (microsoft.keyvault/vaults) to Log Analytics** policy,- :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/policy-definitions.png" alt-text="A screenshot of the policy definitions page."::: + :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/policy-definitions.png" lightbox="./media/diagnostics-settings-policies-deployifnotexists/policy-definitions.png" alt-text="A screenshot of the policy definitions page."::: 1. From the policy definition page, select **Assign** 1. Select the **Parameters** tab. 1. Select the Log Analytics Workspace that you want to send the audit logs to. Find the role in the policy definition by searching for *roleDefinitionIds* ```azurecli az policy assignment identity assign --system-assigned --resource-group rg-001 --role 92aaf0da-9dab-42b6-94a3-d43ce8d16293 --identity-scope /subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourceGroups/rg001 --name policy-assignment-1 ```+ + When assigning policies that send logs to event hubs, you must manually add the *Azure Event Hubs Data Owner* role for the event hub to your policy assigned identity. + + ```azurecli + az role assignment create --assignee <Principal ID> --role "Azure Event Hubs Data Owner" --scope /subscriptions/<subscription ID>/resourceGroups/<event hub's resource group> + ``` 1. Trigger a scan to find existing resources using [`az policy state trigger-scan`](https://learn.microsoft.com/cli/azure/policy/state?view=azure-cli-latest#az-policy-state-trigger-scan). 
```azurecli To apply a policy using the PowerShell, use the following commands: New-AzRoleAssignment -Scope $rg.ResourceId -ObjectId $policyAssignment.Identity.PrincipalId -RoleDefinitionId $roleDefId } ```+ When assigning policies that send logs to event hubs, you must manually add the *Azure Event Hubs Data Owner* role for the event hub to your system-assigned managed identity. + ```azurepowershell + New-AzRoleAssignment -Scope /subscriptions/<subscription ID>/resourceGroups/<event hub's resource group> -ObjectId $policyAssignment.Identity.PrincipalId -RoleDefinitionName "Azure Event Hubs Data Owner" + ``` 1. Scan for compliance, then create a remediation task to force compliance for existing resources. ```azurepowershell To apply a policy using the PowerShell, use the following commands: Get-AzPolicyState -PolicyAssignmentName $policyAssignment.Name -ResourceGroupName $policyAssignment.ResourceGroupName|select-object IsCompliant , ResourceID ``` ++> [!Note] +> When assigning policies that send logs to event hubs, you must manually add the *Azure Event Hubs Data Owner* role for the event hub to your policy assigned identity. +> Use the `az role assignment create` Azure CLI command. +> ```azurecli +> az role assignment create --assignee <Principal ID> --role "Azure Event Hubs Data Owner" --scope /subscriptions/<subscription ID>/resourceGroups/<event hub's resource group> +>``` +> For example: +> ```azurecli +> az role assignment create --assignee xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --role "Azure Event Hubs Data Owner" --scope /subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/resourceGroups/myResourceGroup +>``` +> +> Find your principal ID on the **Policy Assignment** page, **Managed Identity** tab. +> :::image type="content" source="./media/diagnostics-settings-policies-deployifnotexists/find-principal.png" alt-text="A screenshot showing the policy assignment page, managed identity tab."::: + + ## Remediation tasks Policies are applied to new resources when they're created. To apply a policy to existing resources, create a remediation task. Remediation tasks bring resources into compliance with a policy. The following table describes the common parameters for each set of policies. ### Event Hubs policy parameters -This policy deploys a diagnostic setting using a category group to route logs to an Event Hub. +This policy deploys a diagnostic setting using a category group to route logs to an event hub. |Parameter| Description| Valid Values|Default| ||||| |resourceLocation|Resource Location must be the same location as the event hub Namespace|Supported locations||-|eventHubAuthorizationRuleId|Event Hub Authorization Rule ID. The authorization rule is at event hub namespace level. For example, /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.EventHub/namespaces/{Event Hub namespace}/authorizationrules/{authorization rule}||| -|eventHubName|Event Hub Name||Monitoring| +|eventHubAuthorizationRuleId|Event hub Authorization Rule ID. The authorization rule is at event hub namespace level. For example, /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.EventHub/namespaces/{Event Hub namespace}/authorizationrules/{authorization rule}||| +|eventHubName|Event hub name||Monitoring| ### Storage Accounts policy parameters |
azure-monitor | Prometheus Self Managed Grafana Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-self-managed-grafana-azure-active-directory.md | Last updated 11/04/2022 For information on using Grafana with managed system identity, see [Configure Grafana using managed system identity](./prometheus-grafana.md). ## Azure Active Directory authentication -To set up Azure Active Directory authentication, follow the steps bellow: +To set up Azure Active Directory authentication, follow the steps below: 1. Register an app with Azure Active Directory. 1. Grant access for the app to your Azure Monitor workspace. 1. Configure your self-hosted Grafana with the app's credentials. Grafana now supports connecting to Azure-managed Prometheus using the \https://g - [Configure Grafana using managed system identity](./prometheus-grafana.md). - [Collect Prometheus metrics for your AKS cluster](../essentials/prometheus-metrics-enable.md). - [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md). +- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md). |
azure-monitor | Tables Feature Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md | Title: Tables that support ingestion-time transformations in Azure Monitor Logs (preview) description: Reference for tables that support ingestion-time transformations in Azure Monitor Logs (preview). ++ na Last updated 07/10/2022 The following list identifies the tables in a [Log Analytics workspace](log-anal | [EmailEvents](/azure/azure-monitor/reference/tables/emailevents) | | | [EmailPostDeliveryEvents](/azure/azure-monitor/reference/tables/emailpostdeliveryevents) | | | [EmailUrlInfo](/azure/azure-monitor/reference/tables/emailurlinfo) | |-| [Event](/azure/azure-monitor/reference/tables/event) | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving via Diagnostics Extension agent is collected though storage while this path isn't supported. | +| [Event](/azure/azure-monitor/reference/tables/event) | Partial support. Data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving from Diagnostics Extension is collected through Azure storage. This path isn't supported. | | [ExchangeAssessmentRecommendation](/azure/azure-monitor/reference/tables/exchangeassessmentrecommendation) | | | [FailedIngestion](/azure/azure-monitor/reference/tables/failedingestion) | | | [FunctionAppLogs](/azure/azure-monitor/reference/tables/functionapplogs) | | |
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | Title: 'Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)' description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using the REST API Azure Resource Manager template version. ++ Last updated 02/01/2023 |
azure-monitor | Tutorial Logs Ingestion Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md | Title: 'Tutorial: Send data to Azure Monitor Logs by using a REST API (Azure portal)' description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using a REST API (Azure portal version).++ Last updated 07/15/2022- |
azure-monitor | Tutorial Workspace Transformations Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-api.md | Title: Tutorial - Add ingestion-time transformation to Azure Monitor Logs using resource manager templates -description: Describes how to add a custom transformation to data flowing through Azure Monitor Logs using resource manager templates. + Title: Tutorial - Add ingestion-time transformation to Azure Monitor Logs using Resource Manager templates +description: Describes how to add a custom transformation to data flowing through Azure Monitor Logs using Resource Manager templates. ++ Last updated 07/01/2022 -# Tutorial: Add transformation in workspace data collection rule to Azure Monitor using resource manager templates -This tutorial walks you through configuration of a sample [transformation in a workspace data collection rule](../essentials/data-collection-transformations.md) using resource manager templates. [Transformations](../essentials/data-collection-transformations.md) in Azure Monitor allow you to filter or modify incoming data before it's sent to its destination. Workspace transformations provide support for [ingestion-time transformations](../essentials/data-collection-transformations.md) for workflows that don't yet use the [Azure Monitor data ingestion pipeline](../essentials/data-collection.md). +# Tutorial: Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates +This tutorial walks you through configuration of a sample [transformation in a workspace data collection rule](../essentials/data-collection-transformations.md) using Resource Manager templates. [Transformations](../essentials/data-collection-transformations.md) in Azure Monitor allow you to filter or modify incoming data before it's sent to its destination. Workspace transformations provide support for [ingestion-time transformations](../essentials/data-collection-transformations.md) for workflows that don't yet use the [Azure Monitor data ingestion pipeline](../essentials/data-collection.md). -Workspace transformations are stored together in a single [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) for the workspace, called the workspace DCR. Each transformation is associated with a particular table. The transformation will be applied to all data sent to this table from any workflow not using a DCR. +Workspace transformations are stored together in a single [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) for the workspace, called the workspace DCR. Each transformation is associated with a particular table. The transformation is applied to all data sent to this table from any workflow not using a DCR. > [!NOTE]-> This tutorial uses resource manager templates and REST API to configure a workspace transformation. See [Tutorial: Add transformation in workspace data collection rule to Azure Monitor using the Azure portal](tutorial-workspace-transformations-portal.md) for the same tutorial using the Azure portal. +> This tutorial uses Resource Manager templates and REST API to configure a workspace transformation. See [Tutorial: Add transformation in workspace data collection rule to Azure Monitor using the Azure portal](tutorial-workspace-transformations-portal.md) for the same tutorial using the Azure portal. 
In this tutorial, you learn to: In this tutorial, you learn to: > [!NOTE]-> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls using the Azure Monitor **Tables** API and the Azure portal to install resource manager templates. You can use any other method to make these calls. +> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls using the Azure Monitor **Tables** API and the Azure portal to install Resource Manager templates. You can use any other method to make these calls. ## Prerequisites To complete this tutorial, you need the following: Use the **Tables - Update** API to configure the table with the PowerShell code 1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**. - :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening cloud shell."::: + :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell."::: 2. Copy the following PowerShell code and replace the **Path** parameter with the details for your workspace. Use the **Tables - Update** API to configure the table with the PowerShell code Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/LAQueryLogs?api-version=2021-12-01-preview" -Method PUT -payload $tableParams ``` -3. Paste the code into the cloud shell prompt to run it. +3. Paste the code into the Cloud Shell prompt to run it. - :::image type="content" source="media/tutorial-workspace-transformations-api/cloud-shell-script.png" lightbox="media/tutorial-workspace-transformations-api/cloud-shell-script.png" alt-text="Screenshot of script in cloud shell."::: + :::image type="content" source="media/tutorial-workspace-transformations-api/cloud-shell-script.png" lightbox="media/tutorial-workspace-transformations-api/cloud-shell-script.png" alt-text="Screenshot of script in Cloud Shell."::: 4. You can verify that the column was added by going to the **Log Analytics workspace** menu in the Azure portal. Select **Logs** to open Log Analytics and then expand the `LAQueryLogs` table to view its columns. Since this is the first transformation in the workspace, you need to create a [w :::image type="content" source="media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor."::: -3. Paste the resource manager template below into the editor and then click **Save**. This template defines the DCR and contains the transformation query. You don't need to modify this template since it will collect values for its parameters. +3. Paste the Resource Manager template below into the editor and then click **Save**. This template defines the DCR and contains the transformation query. You don't need to modify this template since it will collect values for its parameters. 
- :::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot to edit resource manager template."::: + :::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot to edit Resource Manager template."::: ```json The final step to enable the transformation is to link the DCR to the workspace. Use the **Workspaces - Update** API to configure the table with the PowerShell code below. -1. Click the **Cloud shell** button to open cloud shell again. Copy the following PowerShell code and replace the parameters with values for your workspace and DCR. +1. Click the **Cloud Shell** button to open Cloud Shell again. Copy the following PowerShell code and replace the parameters with values for your workspace and DCR. ```PowerShell $defaultDcrParams = @' Use the **Workspaces - Update** API to configure the table with the PowerShell c Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}?api-version=2021-12-01-preview" -Method PATCH -payload $defaultDcrParams ``` -2. Paste the code into the cloud shell prompt to run it. +2. Paste the code into the Cloud Shell prompt to run it. :::image type="content" source="media/tutorial-workspace-transformations-api/cloud-shell-script-link-workspace.png" lightbox="media/tutorial-workspace-transformations-api/cloud-shell-script-link-workspace.png" alt-text="Screenshot of script to link workspace to DCR."::: |
azure-monitor | Tutorial Workspace Transformations Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-portal.md | Title: 'Tutorial: Add a workspace transformation to Azure Monitor Logs by using the Azure portal' description: Describes how to add a custom transformation to data flowing through Azure Monitor Logs by using the Azure portal. ++ Last updated 07/01/2022 |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | Snapshot-Debugger|[Configure Bring Your Own Storage (BYOS) for Application Insig Snapshot-Debugger|[Release notes for Microsoft.ApplicationInsights.SnapshotCollector](snapshot-debugger/snapshot-collector-release-notes.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| Snapshot-Debugger|[Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](snapshot-debugger/snapshot-debugger-function-app.md)|Removing the TSG from the AzMon TOC and adding to the support TOC|-Snapshot-Debugger|[ Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[ Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot)|Removing the TSG from the AzMon TOC and adding to the support TOC| Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines](snapshot-debugger/snapshot-debugger-vm.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| Snapshot-Debugger|[Debug snapshots on exceptions in .NET apps](snapshot-debugger/snapshot-debugger.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| Virtual-Machines|[Monitor virtual machines with Azure Monitor: Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)|New article| Application-Insights|[Live Metrics: Monitor and diagnose with 1-second latency]( Application-Insights|[Application Insights for Azure VMs and Virtual Machine Scale Sets](app/azure-vm-vmss-apps.md)|Easily monitor your IIS-hosted .NET Framework and .NET Core applications running on Azure VMs and Virtual Machine Scale Sets using a new App Insights Extension.| Application-Insights|[Sampling in Application Insights](app/sampling.md)|We've added embedded links to assist with looking up type definitions. (Dependency, Event, Exception, PageView, Request, Trace)| Application-Insights|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Instructions are now available on how to set the http proxy using an environment variable, which overrides the JSON configuration. 
We've also provided a sample to configure connection string at runtime.|-Application-Insights|[Application Insights for Java 2.x](/previous-versions/azure/azure-monitor/app/deprecated-java-2x)|The Java 2.x retirement notice is available at https://azure.microsoft.com/updates/application-insights-java-2x-retirement.| +Application-Insights|[Application Insights for Java 2.x](/previous-versions/azure/azure-monitor/app/deprecated-java-2x)|The Java 2.x retirement notice is available at [https://azure.microsoft.com/updates/application-insights-java-2x-retirement](https://azure.microsoft.com/updates/application-insights-java-2x-retirement).| Autoscale|[Diagnostic settings in Autoscale](autoscale/autoscale-diagnostics.md)|Updated and expanded content| Autoscale|[Overview of common autoscale patterns](autoscale/autoscale-common-scale-patterns.md)|Clarification of weekend profiles| Autoscale|[Autoscale with multiple profiles](autoscale/autoscale-multiprofile.md)|Added clarifications for profile end times| Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to | [Application Insights SDK support guidance](app/sdk-support-guidance.md) | Updated SDK supportability guidance. | | [Azure AD authentication for Application Insights](app/azure-ad-authentication.md) | Azure AD authenticated telemetry ingestion has been reached general availability.| | [Azure Application Insights for JavaScript web apps](app/javascript.md) | Our Java on-premises page has been retired and redirected to [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/opentelemetry-enable.md?tabs=java).|-| [Azure Application Insights Telemetry Data Model - Telemetry Context](app/data-model-context.md) | Clarified that Anonymous User ID is simply User.Id for easy selection in Intellisense.| +| [Azure Application Insights Telemetry Data Model - Telemetry Context](app/data-model-complete.md#context) | Clarified that Anonymous User ID is simply User.Id for easy selection in Intellisense.| | [Continuous export of telemetry from Application Insights](/previous-versions/azure/azure-monitor/app/export-telemetry) | On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.| | [Dependency Tracking in Azure Application Insights](app/asp-net-dependencies.md) | The Event Hubs Client SDK and ServiceBus Client SDK information has been updated.| | [Monitor Azure app services performance .NET Core](app/azure-web-apps-net-core.md) | Updated Linux troubleshooting guidance. | |
azure-netapp-files | Large Volumes Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md | To enroll in the preview for large volumes, use the [large volumes preview sign- ## Requirements and considerations * Existing regular volumes can't be resized over 100 TiB.- * You cannot convert regular Azure NetApp Files volumes to large volumes. +* You can't convert regular Azure NetApp Files volumes to large volumes. * You must create a large volume at a size greater than 100 TiB. A single volume can't exceed 500 TiB. * You can't resize a large volume to less than 100 TiB.- * You can only resize a large volume up to 30% of lowest provisioned size. -* Large volumes aren't currently supported with Azure NetApp Files backup. -* Large volumes aren't currently supported with cross-region replication. +* You can only resize a large volume up to 30% of the lowest provisioned size. +* Large volumes aren't currently supported with Azure NetApp Files backup. +* Large volumes aren't currently supported with cross-region replication. * You can't create a large volume with application volume groups. * Large volumes aren't currently supported with cross-zone replication. * The SDK for large volumes isn't currently available. +* Large volumes aren't currently supported with cool access tier. * Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You're able to grow to 500 TiB with the throughput ceiling per the following table: | Capacity tier | Volume size (TiB) | Throughput (MiB/s) | |
cognitive-services | Create Translator Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/create-translator-resource.md | The Translator service can be accessed through two different resource types: 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies. -1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using the Document Translation feature with managed identity authentication, choose a non-global region. +1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using the Document Translation feature with [managed identity authorization](document-translation/how-to-guides/create-use-managed-identities.md), choose a geographic region such as **East US**. 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure. |
cognitive-services | Create Manage Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/create-manage-workspace.md | -* Change the resource key for global regions. If you're using a regional specific resource, you can't change your resource key. +* Change the resource key if the region is **Global**. If you're using a region-specific resource such as **East US**, you can't change your resource key. * Change the workspace name. |
cognitive-services | Create Use Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md | -Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources: +Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data without having SAS tokens included with your HTTP requests. :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC)."::: -* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests. +* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. * To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md). * There's no added cost to use managed identities in Azure. --- > [!IMPORTANT] >-> * When using managed identities, don't include a SAS token URL with your HTTP requests; your requests will fail. +> * When using managed identities, don't include a SAS token URL with your HTTP requests; your requests will fail. Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your [source and target URLs](#post-request-body). >-> * Currently, Document Translation doesn't support managed identity in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region. +> * To use managed identities for Document Translation operations, you must [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a specific geographic Azure region such as **East US**. If your Translator resource region is set to **Global**, then you can't use managed identity for Document Translation. You can still use [Shared Access Signature tokens (SAS)](create-sas-tokens.md) for Document Translation. > > * Document Translation is **only** available in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). >-> * Managed identities are a safer way to grant access to data without having SAS tokens included with your HTTP requests. ## Prerequisites To get started, you need: * An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/); if you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). -* A [**single-service Translator**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (not a multi-service Cognitive Services) resource assigned to a **non-global** region. 
For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../../cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows). +* A [**single-service Translator**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (not a multi-service Cognitive Services) resource assigned to a **geographical** region such as **West US**. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../../cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows). * A brief understanding of [**Azure role-based access control (`Azure RBAC`)**](../../../../role-based-access-control/role-assignments-portal.md) using the Azure portal. The following headers are included with each Document Translation API request: * The `targetUrl` for each target language must be unique. >[!NOTE]-> If a file with the same name already exists in the destination, the job will fail. +> If a file with the same name already exists in the destination, the job will fail. When using managed identities, don't include a SAS token URL with your HTTP requests. Otherwise your requests will fail. <!-- markdownlint-disable MD024 --> ### Translate all documents in a container |
cognitive-services | Use Rest Api Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/use-rest-api-programmatically.md | - Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service. You can use the Document Translation API to asynchronously translate whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats) while preserving source document structure and text formatting. In this how-to guide, you'll learn to use Document Translation APIs with a programming language of your choice and the HTTP REST API. + Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service. You can use the Document Translation API to asynchronously translate whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats) while preserving source document structure and text formatting. In this how-to guide, you learn to use Document Translation APIs with a programming language of your choice and the HTTP REST API. ## Prerequisites -To get started, you'll need: +To get started, you need: * An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). -* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your blob data within your storage account. +* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You create containers to store and organize your blob data within your storage account. * A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource): To get started, you'll need: 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies. - 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](create-use-managed-identities.md) for authentication, choose a **non-global** region. + 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**. 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure. Requests to the Translator service require a read-only key for authenticating ac 1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page. 1. In the left rail, under *Resource Management*, select **Keys and Endpoint**. 1. Copy and paste your key in a convenient location, such as *Microsoft Notepad*.-1. You'll paste it into the code ample to authenticate your request to the Document Translation service. +1. You paste it into the code sample to authenticate your request to the Document Translation service. 
:::image type="content" source="../../media/translator-keys.png" alt-text="Image of the get your key field in Azure portal."::: ## Create Azure blob storage containers -You'll need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files. +You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files. * **Source container**. This container is where you upload your files for translation (required).-* **Target container**. This container is where your translated files will be stored (required). +* **Target container**. This container is where your translated files are stored (required). > [!NOTE] > Document Translation supports glossaries as blobs in target containers (not separate glossary containers). If want to include a custom glossary, add it to the target container and include the` glossaryUrl` with the request. If the translation language pair is not present in the glossary, it will not be applied. *See* [Translate documents using a custom glossary](#translate-documents-using-a-custom-glossary) The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share ## HTTP requests -A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service. The translated documents will be listed in your target container. +A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container. ### HTTP headers gradle init --type basic * When prompted to choose a **DSL**, select **Kotlin**. -* Update the `build.gradle.kts` file. Keep in mind that you'll need to update your `mainClassName` depending on the sample: +* Update the `build.gradle.kts` file. Keep in mind that you need to update your `mainClassName` depending on the sample: ```java plugins { gradle run #### Locating the `id` value -* You'll find the job `id` in the POST method response Header `Operation-Location` URL value. The last parameter of the URL is the operation's job **`id`**: +* You find the job `id` in the POST method response Header `Operation-Location` URL value. The last parameter of the URL is the operation's job **`id`**: |**Response header**|**Result URL**| |--|-| func main() { ### Brief overview -Cancel currently processing or queued job. Only documents for which translation hasn't started will be canceled. +Cancel currently processing or queued job. Only documents for which translation hasn't started are canceled. ### [C#](#tab/csharp) |
cognitive-services | Language Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/language-studio.md | recommendations: false > [!IMPORTANT] > Document Translation in Language Studio is currently in Public Preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback. - Document Translation in [**Azure Cognitive Services Language Studio**](https://language.cognitive.azure.com/home) is a no-code user interface that lets you interactively translate documents from local or Azure blob storage . + Document Translation in [**Azure Cognitive Services Language Studio**](https://language.cognitive.azure.com/home) is a no-code user interface that lets you interactively translate documents from local or Azure blob storage. ## Prerequisites Document Translation in Language Studio requires the following resources: * A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource) with [**system-assigned managed identity**](how-to-guides/create-use-managed-identities.md#enable-a-system-assigned-managed-identity) enabled and a [**Storage Blob Data Contributor**](how-to-guides/create-use-managed-identities.md#grant-access-to-your-storage-account) role assigned. For more information, *see* [**Managed identities for Document Translation**](how-to-guides/create-use-managed-identities.md). Also, make sure the region and pricing sections are completed as follows: - * **Resource Region**. For this project, choose a **non-global** region. For Document Translation, [system-assigned managed identity](how-to-guides/create-use-managed-identities.md) isn't supported in the global region. + * **Resource Region**. For this project, choose a geographic region such as **East US**. For Document Translation, [system-assigned managed identity](how-to-guides/create-use-managed-identities.md) isn't supported for the **Global** region. + * **Pricing tier**. Select Standard S1 or D3 to try the service. Document Translation isn't supported in the free tier. * An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). An active Azure blob storage account is required to use Document Translation in the Language Studio. Now that you've completed the prerequisites, let's start translating documents! ## Get started -At least one **source document** is required. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx). +At least one **source document** is required. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx). The source language is English. 1. Navigate to [Language Studio](https://language.cognitive.azure.com/home). |
cognitive-services | Get Started With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/get-started-with-rest-api.md | zone_pivot_groups: programming-languages-set-translator # Get started with Document Translation - Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting. +Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting. ## Prerequisites To get started, you need: 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies. - 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **non-global** region. + 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**. 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure. The custom domain endpoint is a URL formatted with your resource name, hostname, > * **All API requests to the Document Translation service require a custom domain endpoint**. > * Don't use the Text Translation endpoint found on your Azure portal resource *Keys and Endpoint* page nor the global translator endpoint, `api.cognitive.microsofttranslator.com`, to make HTTP requests to Document Translation. 
- > [!div class="nextstepaction"] - > [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) +> [!div class="nextstepaction"] +> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) ### Retrieve your key and endpoint Requests to the Translator service require a read-only key and custom endpoint t :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal."::: - > [!div class="nextstepaction"] - > [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint) +> [!div class="nextstepaction"] +> [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint) ## Create Azure blob storage containers -You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files. +You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files. * **Source container**. This container is where you upload your files for translation (required). * **Target container**. This container is where your translated files are stored (required). The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share > * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**. > * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication. - > [!div class="nextstepaction"] - > [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers) +> [!div class="nextstepaction"] +> [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers) ### Sample document -For this project, you need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart. +For this project, you need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart. The source language is English. ## HTTP request |
cognitive-services | Quickstart Translator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md | ms.devlang: csharp, golang, java, javascript, python # Quickstart: Azure Cognitive Services Translator -Try the latest version of Azure Translator. In this quickstart, you'll get started using the Translator service to [translate text](reference/v3-0-translate.md) using a programming language of your choice or the REST API. For this project, we recommend using the free pricing tier (F0), while you're learning the technology, and later upgrading to a paid tier for production. +Try the latest version of Azure Translator. In this quickstart, get started using the Translator service to [translate text](reference/v3-0-translate.md) using a programming language of your choice or the REST API. For this project, we recommend using the free pricing tier (F0), while you're learning the technology, and later upgrading to a paid tier for production. ## Prerequisites -To get started, you'll need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/) +You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/) * Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal. * After your resource deploys, select **Go to resource** and retrieve your key and endpoint. - * You need the key and endpoint from the resource to connect your application to the Translator service. You'll paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page: + * You need the key and endpoint from the resource to connect your application to the Translator service. You paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page: :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page."::: To get started, you'll need an active Azure subscription. If you don't have an A ## Headers -To call the Translator service via the [REST API](reference/rest-api-guide.md), you'll need to include the following headers with each request. Don't worry, we'll include the headers for you in the sample code for each programming language. +To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to include the following headers with each request. Don't worry, we include the headers for you in the sample code for each programming language. For more information on Translator authentication options, _see_ the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide. Header|Value| Condition | | |: |:| |**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|• ***Required***|-|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |• ***Required*** when using a multi-service Cognitive Services or regional (non-global) resource.</br>• ***Optional*** when using a single-service global Translator Resource. +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. 
|• ***Required*** when using a multi-service Cognitive Services or regional (geographic) resource like **West US**.</br>• ***Optional*** when using a single-service global Translator Resource. |**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|• **Required**| |**Content-Length**|The **length of the request** body.|• ***Optional***| Header|Value| Condition | ## Translate text -The core operation of the Translator service is translating text. In this quickstart, you'll build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we'll review some parameters that can be used to adjust both the request and the response. +The core operation of the Translator service is translating text. In this quickstart, you build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response. ### [C#: Visual Studio](#tab/csharp) After a successful call, you should see the following response: mkdir translator-text-app; cd translator-text-app ``` -1. Run the `gradle init` command from the translator-text-app directory. This command will create essential build files for Gradle, including _build.gradle.kts_, which is used at runtime to create and configure your application. +1. Run the `gradle init` command from the translator-text-app directory. This command creates essential build files for Gradle, including _build.gradle.kts_, which is used at runtime to create and configure your application. ```console gradle init --type basic After a successful call, you should see the following response: mkdir -p src/main/java ``` - You'll create the following directory structure: + You create the following directory structure: :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure."::: |
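For reference, here's a minimal Python sketch of the request this quickstart builds: one source language (`from`) and two target languages (`to`), with the headers from the table above. The key, region, and target languages are placeholders; the quickstart's own samples build the equivalent call in C#, Go, Java, JavaScript, and Python.

```python
import json
import uuid

import requests

key = "<your-translator-key>"         # from the Keys and Endpoint page in the Azure portal
region = "<your-resource-region>"     # required for regional or multi-service resources
endpoint = "https://api.cognitive.microsofttranslator.com"

params = {"api-version": "3.0", "from": "en", "to": ["fr", "zu"]}  # one source, two targets
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),  # optional request identifier
}
body = [{"text": "I would really like to drive your car around the block a few times!"}]

response = requests.post(endpoint + "/translate", params=params, headers=headers, json=body)
print(json.dumps(response.json(), indent=4, ensure_ascii=False))
```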
cognitive-services | Translator Text Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md | To call the Translator service via the [REST API](reference/rest-api-guide.md), |Header|Value| Condition | | |: |:| |**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|<ul><li>***Required***</li></ul> |-|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |<ul><li>***Required*** when using a multi-service Cognitive Services or regional (non-global) resource.</li><li> ***Optional*** when using a single-service Translator Resource.</li></ul>| +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |<ul><li>***Required*** when using a multi-service Cognitive Services or regional (geographic) resource like **West US**.</li><li> ***Optional*** when using a single-service Translator Resource.</li></ul>| |**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|<ul><li>***Required***</li></ul>| |**Content-Length**|The **length of the request** body.|<ul><li>***Optional***</li></ul> | |**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named ClientTraceId.|<ul><li>***Optional***</li></ul> |
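Since the headers table notes that the trace ID can travel in the query string instead of the `X-ClientTraceId` header, here's a tiny sketch of the query-parameter form; the key and sample text are placeholders.

```python
import uuid

import requests

params = {
    "api-version": "3.0",
    "to": "fr",
    "ClientTraceId": str(uuid.uuid4()),  # query-string alternative to the X-ClientTraceId header
}
response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params=params,
    headers={"Ocp-Apim-Subscription-Key": "<your-translator-key>", "Content-Type": "application/json"},
    json=[{"text": "Hello, world!"}],
)
print(response.json())
```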
communication-services | Call Recording | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md | -For example, you can record 1:1 or 1:N scenarios for audio and video calls enabled by [Calling Client SDK](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features). +For example, you can record 1:1 or 1:N scenarios for audio and video calls enabled by [Calling Client SDK](./calling-sdk-features.md).  -But also, you can use Call Recording to record complex PSTN or VoIP inbound and outbound calling workflows managed by [Call Automation](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/call-automation). -Regardless of how you establish the call, Call Recording allows you to produce mixed or unmixed media files that are stored for 48 hours on a built-in temporary storage. You can retrieve the files and take them to the long-term storage solution of your choice. Call Recording supports all Azure Communication Services data regions. +But also, you can use Call Recording to record complex PSTN or VoIP inbound and outbound calling workflows managed by [Call Automation](../call-automation/call-automation.md). +Regardless of how you established the call, Call Recording allows you to produce mixed or unmixed media files that are stored for 48 hours on a built-in temporary storage. You can retrieve the files and take them to the long-term storage solution of your choice. Call Recording supports all Azure Communication Services data regions. +  A `recordingId` is returned when recording is started, which is then used for fo ## Event Grid notifications-Call Recording uses [Azure Event Grid](https://learn.microsoft.com/azure/event-grid/event-schema-communication-services) to provide you with notifications related to media and metadata. ++Call Recording use [Azure Event Grid](../../../event-grid/event-schema-communication-services.md) to provide you with notifications related to media and metadata. + > [!NOTE] > Azure Communication Services provides short term media storage for recordings. **Recordings will be available to download for 48 hours.** After 48 hours, recordings will no longer be available. Regulations around the maintenance of personal data require the ability to expor ## Known Issues -It's possible that when a call is created using Call Automation, you don't get a value in the `serverCallId`. If that's the case, get the `serverCallId` from the `CallConnected` event method described in [Get serverCallId](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions?pivots=programming-language-csharp#configure-programcs-to-answer-the-call). +It's possible that when a call is created using Call Automation, you won't get a value in the `serverCallId`. If that's the case, get the `serverCallId` from the `CallConnected` event method described in [Get serverCallId](../../quickstarts/call-automation/callflows-for-customer-interactions.md). 
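To make the Known Issues workaround above concrete, here is a hedged sketch of pulling `serverCallId` out of a Call Automation callback event so it can be passed to the recording APIs. The callback route (`/api/callbacks`), the event `type` suffix, and the `data.serverCallId` field name are assumptions based on the linked quickstart; verify them against the events your callback endpoint actually receives.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/api/callbacks", methods=["POST"])  # whatever callback URI you registered when answering the call
def call_automation_events():
    for event in request.get_json():
        # Field names below are assumptions; confirm against the CallConnected payload in the linked article.
        if event.get("type", "").endswith("CallConnected"):
            server_call_id = event.get("data", {}).get("serverCallId")
            print(f"serverCallId for recording operations: {server_call_id}")
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```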
## Next steps For more information, see the following articles: - To learn more about Call Recording, check out the [Call Recording Quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).-- Learn more about [Call Automation](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions?pivots=programming-language-csharp).-- Learn more about [Video Calling](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling?pivots=platform-web).+- Learn more about [Call Automation](../../quickstarts/call-automation/callflows-for-customer-interactions.md). +- Learn more about [Video Calling](../../quickstarts/voice-video-calling/get-started-with-video-calling.md). |
communication-services | Get Started Call Recording | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-call-recording.md | -This quickstart gets you started with Call Recording for voice and video calls. To start using the Call Recording APIs, you must have a call in place. Make sure you're familiar with [Calling client SDK](get-started-with-video-calling.md) and/or [Call Automation](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions?pivots=programming-language-csharp#configure-programcs-to-answer-the-call) to build the end-user calling experience. +This quickstart gets you started with Call Recording for voice and video calls. To start using the Call Recording APIs, you must have a call in place. Make sure you're familiar with [Calling client SDK](get-started-with-video-calling.md) and/or [Call Automation](../call-automation/callflows-for-customer-interactions.md#build-a-customer-interaction-workflow-using-call-automation) to build the end-user calling experience. ::: zone pivot="programming-language-csharp" [!INCLUDE [Test Call Recording with C#](./includes/call-recording-samples/call-recording-csharp.md)] This quickstart gets you started with Call Recording for voice and video calls. [!INCLUDE [Test Call Recording with Java](./includes/call-recording-samples/call-recording-java.md)] ::: zone-end + ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources). For more information, see the following articles: - Download our [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/ServerRecording) and [.NET](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/tree/main/ServerRecording) call recording sample apps - Learn more about [Call Recording](../../concepts/voice-video-calling/call-recording.md)-- Learn more about [Call Automation](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/call-automation)+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) |
container-apps | Azure Resource Manager Api Spec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md | properties: value: "startup probe" initialDelaySeconds: 3 periodSeconds: 3+ volumeMounts: + - mountPath: /myempty + volumeName: myempty + - mountPath: /myfiles + volumeName: azure-files-volume scale: minReplicas: 1 maxReplicas: 3+ volumes: + - name: myempty + storageType: EmptyDir + - name: azure-files-volume + storageType: AzureFile + storageName: myazurefiles ``` |
container-apps | Dapr Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md | Now that you've learned about Dapr and some of the challenges it solves: - Try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart]. - Walk through a tutorial [using GitHub Actions to automate changes for a multi-revision, Dapr-enabled container app][dapr-github-actions].+- Learn how to [perform event-driven work using Dapr bindings][dapr-bindings-tutorial] <!-- Links Internal --> [dapr-quickstart]: ./microservices-dapr.md [dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md [dapr-github-actions]: ./dapr-github-actions.md+[dapr-bindings-tutorial]: ./microservices-dapr-bindings.md <!-- Links External --> |
container-apps | Microservices Dapr Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-bindings.md | + + Title: "Event-driven work using Dapr Bindings" +description: Deploy a sample Dapr Bindings application to Azure Container Apps. ++++ Last updated : 03/08/2023+zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json +zone_pivot_groups: dapr-languages-set +++# Event-driven work using Dapr Bindings ++In this tutorial, you create a microservice to demonstrate [Dapr's Bindings API](https://docs.dapr.io/developing-applications/building-blocks/bindings/bindings-overview/) to work with external systems as inputs and outputs. You'll: +> [!div class="checklist"] +> * Run the application locally. +> * Deploy the application to Azure Container Apps via the Azure Developer CLI with the provided Bicep. ++The service listens to input binding events from a system CRON and then outputs the contents of local data to a PostreSql output binding. +++> [!NOTE] +> This tutorial uses [Azure Developer CLI (`azd`)](/azure/developer/azure-developer-cli/overview), which is currently in preview. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. The `azd` previews are partially covered by customer support on a best-effort basis. ++## Prerequisites ++- Install [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) +- [Install](https://docs.dapr.io/getting-started/install-dapr-cli/) and [init](https://docs.dapr.io/getting-started/install-dapr-selfhost/) Dapr +- [Docker Desktop](https://www.docker.com/products/docker-desktop/) +- Install [Git](https://git-scm.com/downloads) +++## Run the Node.js application locally ++Before deploying the application to Azure Container Apps, start by running the PostgreSQL container and JavaScript service locally with [Docker Compose](https://docs.docker.com/compose/) and Dapr. ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres.git + ``` ++1. Navigate into the sample's root directory. ++ ```bash + cd bindings-dapr-nodejs-cron-postgres + ``` ++### Run the Dapr application using the Dapr CLI ++1. From the sample's root directory, change directories to `db`. ++ ```bash + cd db + ``` +1. Run the PostgreSQL container with Docker Compose. ++ ```bash + docker compose up -d + ``` ++1. Open a new terminal window and navigate into `/batch` in the sample directory. ++ ```bash + cd bindings-dapr-nodejs-cron-postgres/batch + ``` ++1. Install the dependencies. ++ ```bash + npm install + ``` ++1. Run the JavaScript service application with Dapr. ++ ```bash + dapr run --app-id batch-sdk --app-port 5002 --dapr-http-port 3500 --resources-path ../components -- node index.js + ``` ++ The `dapr run` command runs the Dapr binding application locally. Once the application is running successfully, the terminal window shows the output binding data. ++ #### Expected output + + The batch service listens to input binding events from a system CRON and then outputs the contents of local data to a PostgreSQL output binding. 
+ + ``` + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + ``` ++1. In the `./db` terminal, stop the PostgreSQL container. ++ ```bash + docker compose stop + ``` ++## Deploy the Dapr application template using Azure Developer CLI ++Now that you've run the application locally, let's deploy the Dapr bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component. ++### Prepare the project ++Navigate into the [sample's](https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres) root directory. ++```bash +cd bindings-dapr-nodejs-cron-postgres +``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. 
The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` +++ #### Expected output + + ```azdeveloper + Initializing a new project (azd init) + + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com/#blade/HubsExtension/DeploymentDetailsBlade/overview + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: dashboard-name + (Γ£ô) Done: Azure Database for PostgreSQL flexible server: postgres-server + (Γ£ô) Done: Key vault: key-vault-name + (Γ£ô) Done: Container Apps Environment: container-apps-env-name + (Γ£ô) Done: Container App: container-app-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service api + - Endpoint: https://your-container-app-endpoint.region.azurecontainerapps.io/ + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/your-subscription-ID/resourceGroups/your-resource-group/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the batch container app is logging each insert into Azure PostgreSQL every 10 seconds. ++1. Copy the Container App name from the terminal output. ++1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the Container App resource by name. ++1. In the Container App dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of the navigating to the log streams from the Azure Container Apps side menu."::: ++1. Confirm the container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view.png" alt-text="Screenshot of the container app's log stream in the Azure portal."::: ++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres-/tree/master/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse the fully functional app. ++++## Run the Python application locally ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/bindings-dapr-python-cron-postgres) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/bindings-dapr-python-cron-postgres.git + ``` ++1. Navigate into the sample's root directory. ++ ```bash + cd bindings-dapr-python-cron-postgres + ``` ++### Run the Dapr application using the Dapr CLI ++Before deploying the application to Azure Container Apps, start by running the PostgreSQL container and Python service locally with [Docker Compose](https://docs.docker.com/compose/) and Dapr. ++1. From the sample's root directory, change directories to `db`. ++ ```bash + cd db + ``` +1. 
Run the PostgreSQL container with Docker Compose. ++ ```bash + docker compose up -d + ``` ++1. Open a new terminal window and navigate into `/batch` in the sample directory. ++ ```bash + cd bindings-dapr-python-cron-postgres/batch + ``` ++1. Install the dependencies. ++ ```bash + pip install -r requirements.txt + ``` ++1. Run the Python service application with Dapr. ++ ```bash + dapr run --app-id batch-sdk --app-port 5001 --dapr-http-port 3500 --resources-path ../components -- python3 app.py + ``` ++ The `dapr run` command runs the Dapr binding application locally. Once the application is running successfully, the terminal window shows the output binding data. ++ #### Expected output + + The batch service listens to input binding events from a system CRON and then outputs the contents of local data to a PostgreSQL output binding. + + ``` + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + ``` ++1. In the `./db` terminal, stop the PostgreSQL container. ++ ```bash + docker compose stop + ``` ++## Deploy the Dapr application template using Azure Developer CLI ++Now that you've run the application locally, let's deploy the Dapr bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component. ++### Prepare the project ++Navigate into the [sample's](https://github.com/Azure-Samples/bindings-dapr-python-cron-postgres) root directory. ++```bash +cd bindings-dapr-python-cron-postgres +``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. 
The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output + + ```azdeveloper + Initializing a new project (azd init) + + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com/#blade/HubsExtension/DeploymentDetailsBlade/overview + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: dashboard-name + (Γ£ô) Done: Azure Database for PostgreSQL flexible server: postgres-server + (Γ£ô) Done: Key vault: key-vault-name + (Γ£ô) Done: Container Apps Environment: container-apps-env-name + (Γ£ô) Done: Container App: container-app-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service api + - Endpoint: https://your-container-app-endpoint.region.azurecontainerapps.io/ + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/your-subscription-ID/resourceGroups/your-resource-group/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the batch container app is logging each insert into Azure PostgreSQL every 10 seconds. ++1. Copy the Container App name from the terminal output. ++1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the Container App resource by name. ++1. In the Container App dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of the navigating to the log streams from the Azure Container Apps side menu."::: ++1. Confirm the container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view.png" alt-text="Screenshot of the container app's log stream in the Azure portal."::: ++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres-/tree/master/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse the fully functional app. +++## Run the .NET application locally ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/bindings-dapr-csharp-cron-postgres) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/bindings-dapr-csharp-cron-postgres.git + ``` ++1. Navigate into the sample's root directory. 
++ ```bash + cd bindings-dapr-csharp-cron-postgres + ``` ++### Run the Dapr application using the Dapr CLI ++Before deploying the application to Azure Container Apps, start by running the PostgreSQL container and .NET service locally with [Docker Compose](https://docs.docker.com/compose/) and Dapr. ++1. From the sample's root directory, change directories to `db`. ++ ```bash + cd db + ``` +1. Run the PostgreSQL container with Docker Compose. ++ ```bash + docker compose up -d + ``` ++1. Open a new terminal window and navigate into `/batch` in the sample directory. ++ ```bash + cd bindings-dapr-csharp-cron-postgres/batch + ``` ++1. Install the dependencies. ++ ```bash + dotnet build + ``` ++1. Run the .NET service application with Dapr. ++ ```bash + dapr run --app-id batch-sdk --app-port 7002 --resources-path ../components -- dotnet run + ``` ++ The `dapr run` command runs the Dapr binding application locally. Once the application is running successfully, the terminal window shows the output binding data. ++ #### Expected output + + The batch service listens to input binding events from a system CRON and then outputs the contents of local data to a PostgreSQL output binding. + + ``` + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + == APP == {"sql": "insert into orders (orderid, customer, price) values (1, 'John Smith', 100.32);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (2, 'Jane Bond', 15.4);"} + == APP == {"sql": "insert into orders (orderid, customer, price) values (3, 'Tony James', 35.56);"} + == APP == Finished processing batch + ``` ++1. In the `./db` terminal, stop the PostgreSQL container. ++ ```bash + docker compose stop + ``` ++## Deploy the Dapr application template using Azure Developer CLI ++Now that you've run the application locally, let's deploy the Dapr bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component. ++### Prepare the project ++Navigate into the [sample's](https://github.com/Azure-Samples/bindings-dapr-csharp-cron-postgres) root directory. ++```bash +cd bindings-dapr-csharp-cron-postgres +``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | + | Azure Subscription | The Azure subscription for your resources. | ++1. 
Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output + + ```azdeveloper + Initializing a new project (azd init) + + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com/#blade/HubsExtension/DeploymentDetailsBlade/overview + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: dashboard-name + (Γ£ô) Done: Azure Database for PostgreSQL flexible server: postgres-server + (Γ£ô) Done: Key vault: key-vault-name + (Γ£ô) Done: Container Apps Environment: container-apps-env-name + (Γ£ô) Done: Container App: container-app-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service api + - Endpoint: https://your-container-app-endpoint.region.azurecontainerapps.io/ + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/your-subscription-ID/resourceGroups/your-resource-group/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the batch container app is logging each insert into Azure PostgreSQL every 10 seconds. ++1. Copy the Container App name from the terminal output. ++1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the Container App resource by name. ++1. In the Container App dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of the navigating to the log streams from the Azure Container Apps side menu."::: ++1. Confirm the container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view.png" alt-text="Screenshot of the container app's log stream in the Azure portal."::: ++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres-/tree/master/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse the fully functional app. 
+++++## Clean up resources ++If you're not going to continue to use this application, delete the Azure resources you've provisioned with the following command. ++```azdeveloper +azd down +``` ++## Next steps ++- Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md). +- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible). |
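For readers who want to see the pattern behind the tutorial's batch service without cloning the sample, here is a minimal sketch of a CRON-triggered input binding handler that invokes a PostgreSQL output binding through Dapr's HTTP bindings API. The binding names (`cron`, `sqldb`), the Flask framework, and the hard-coded orders are illustrative assumptions; the actual sample uses the Dapr SDK and its own component names under `../components`.

```python
import requests
from flask import Flask

app = Flask(__name__)

DAPR_OUTPUT_BINDING = "http://localhost:3500/v1.0/bindings/sqldb"  # 'sqldb' is an assumed binding name

orders = [
    {"orderid": 1, "customer": "John Smith", "price": 100.32},
    {"orderid": 2, "customer": "Jane Bond", "price": 15.4},
    {"orderid": 3, "customer": "Tony James", "price": 35.56},
]

# Dapr delivers input binding events by POSTing to a route that matches the binding name,
# so a CRON component named 'cron' triggers this handler on its configured schedule.
@app.route("/cron", methods=["POST"])
def process_batch():
    for order in orders:
        sql = (
            "insert into orders (orderid, customer, price) "
            f"values ({order['orderid']}, '{order['customer']}', {order['price']});"
        )
        # The PostgreSQL output binding executes raw SQL via the 'exec' operation.
        requests.post(DAPR_OUTPUT_BINDING, json={"operation": "exec", "metadata": {"sql": sql}})
        print({"sql": sql})
    print("Finished processing batch")
    return "", 200

if __name__ == "__main__":
    app.run(port=5002)  # matches the --app-port used in the tutorial's dapr run command
```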
container-apps | Microservices Dapr Pubsub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-pubsub.md | + + Title: "Microservices communication using Dapr Pub/sub messaging" +description: Enable two sample Dapr applications to send and receive messages and leverage Azure Container Apps. ++++ Last updated : 03/16/2023+zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json +zone_pivot_groups: dapr-languages-set +++# Microservices communication using Dapr Pub/sub messaging ++In this tutorial, you'll: +> [!div class="checklist"] +> * Create a publisher microservice and a subscriber microservice that leverage the [Dapr pub/sub API](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) to communicate using messages for event-driven architectures. +> * Deploy the application to Azure Container Apps via the Azure Developer CLI with provided Bicep. ++The sample pub/sub project includes: +1. A message generator (publisher) `checkout` service that generates messages of a specific topic. +1. An (subscriber) `order-processor` service that listens for messages from the `checkout` service of a specific topic. +++> [!NOTE] +> This tutorial uses [Azure Developer CLI (`azd`)](/azure/developer/azure-developer-cli/overview), which is currently in preview. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. The `azd` previews are partially covered by customer support on a best-effort basis. ++## Prerequisites ++- Install [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) +- [Install](https://docs.dapr.io/getting-started/install-dapr-cli/) and [init](https://docs.dapr.io/getting-started/install-dapr-selfhost/) Dapr +- [Docker Desktop](https://www.docker.com/products/docker-desktop/) +- Install [Git](https://git-scm.com/downloads) +++## Run the Node.js applications locally ++Before deploying the application to Azure Container Apps, run the `order-processor` and `checkout` services locally with Dapr and Azure Service Bus. ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus.git + ``` ++1. Navigate into the sample's root directory. ++ ```bash + cd pubsub-dapr-nodejs-servicebus + ``` ++### Run the Dapr applications using the Dapr CLI ++Start by running the `order-processor` subscriber service with Dapr. ++1. From the sample's root directory, change directories to `order-processor`. ++ ```bash + cd order-processor + ``` +1. Install the dependencies. ++ ```bash + npm install + ``` ++1. Run the `order-processor` service with Dapr. ++ ```bash + dapr run --app-port 5001 --app-id order-processing --app-protocol http --dapr-http-port 3501 --resources-path ../components -- npm run start + ``` ++1. In a new terminal window, from the sample's root directory, navigate to the `checkout` publisher service. ++ ```bash + cd checkout + ``` ++1. Install the dependencies. ++ ```bash + npm install + ``` ++1. Run the `checkout` service with Dapr. 
++ ```bash + dapr run --app-id checkout --app-protocol http --resources-path ../components -- npm run start + ``` ++ #### Expected output ++ In both terminals, the `checkout` service publishes 10 messages received by the `order-processor` service before exiting. ++ `checkout` output: ++ ``` + == APP == Published data: {"orderId":1} + == APP == Published data: {"orderId":2} + == APP == Published data: {"orderId":3} + == APP == Published data: {"orderId":4} + == APP == Published data: {"orderId":5} + == APP == Published data: {"orderId":6} + == APP == Published data: {"orderId":7} + == APP == Published data: {"orderId":8} + == APP == Published data: {"orderId":9} + == APP == Published data: {"orderId":10} + ``` ++ `order-processor` output: ++ ``` + == APP == Subscriber received: {"orderId":1} + == APP == Subscriber received: {"orderId":2} + == APP == Subscriber received: {"orderId":3} + == APP == Subscriber received: {"orderId":4} + == APP == Subscriber received: {"orderId":5} + == APP == Subscriber received: {"orderId":6} + == APP == Subscriber received: {"orderId":7} + == APP == Subscriber received: {"orderId":8} + == APP == Subscriber received: {"orderId":9} + == APP == Subscriber received: {"orderId":10} + ``` ++1. Make sure both applications have stopped by running the following commands. In the checkout terminal: ++ ```sh + dapr stop --app-id checkout + ``` ++ In the order-processor terminal: ++ ```sh + dapr stop --app-id order-processor + ``` ++## Deploy the Dapr application template using Azure Developer CLI ++Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). ++### Prepare the project ++In a new terminal window, navigate into the [sample's](https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus) root directory. ++```bash +cd pubsub-dapr-nodejs-servicebus +``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. 
The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output ++ ```azdeveloper + Initializing a new project (azd init) + + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: portal-dashboard-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Key vault: key-vault-name + (Γ£ô) Done: Container Apps Environment: ca-env-name + (Γ£ô) Done: Container App: ca-checkout-name + (Γ£ô) Done: Container App: ca-orders-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service checkout + (Γ£ô) Done: Deploying service orders + - Endpoint: https://ca-orders-name.endpoint.region.azurecontainerapps.io/ + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/subscription-id/resourceGroups/resource-group-name/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the `checkout` service is publishing messages to the Azure Service Bus topic. ++1. Copy the `checkout` container app name from the terminal output. ++1. Go to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name. ++1. In the Container Apps dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of navigating to the Log stream page in the Azure portal."::: +++1. Confirm the `checkout` container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-checkout-pubsub.png" alt-text="Screenshot of the checkout service container's log stream in the Azure portal."::: ++1. Do the same for the `order-processor` service. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-order-processor-pubsub.png" alt-text="Screenshot of the order processor service container's log stream in the Azure portal."::: ++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus/tree/main/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse to the fully functional app. +++++## Run the Python applications locally ++Before deploying the application to Azure Container Apps, run the `order-processor` and `checkout` services locally with Dapr and Azure Service Bus. ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/pubsub-dapr-python-servicebus) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/pubsub-dapr-python-servicebus.git + ``` ++1. Navigate into the sample's root directory. 
++ ```bash + cd pubsub-dapr-python-servicebus + ``` ++### Run the Dapr applications using the Dapr CLI ++Start by running the `order-processor` subscriber service with Dapr. ++1. From the sample's root directory, change directories to `order-processor`. ++ ```bash + cd order-processor + ``` +1. Install the dependencies. ++ ```bash + pip3 install -r requirements.txt + ``` ++1. Run the `order-processor` service with Dapr. ++ ```bash + dapr run --app-id order-processor --resources-path ../components/ --app-port 5001 -- python3 app.py + ``` ++1. In a new terminal window, from the sample's root directory, navigate to the `checkout` publisher service. ++ ```bash + cd checkout + ``` ++1. Install the dependencies. ++ ```bash + pip3 install -r requirements.txt + ``` ++1. Run the `checkout` service with Dapr. ++ ```bash + dapr run --app-id checkout --resources-path ../components/ -- python3 app.py + ``` ++ #### Expected output ++ In both terminals, the `checkout` service publishes 10 messages received by the `order-processor` service before exiting. ++ `checkout` output: ++ ``` + == APP == Published data: {"orderId":1} + == APP == Published data: {"orderId":2} + == APP == Published data: {"orderId":3} + == APP == Published data: {"orderId":4} + == APP == Published data: {"orderId":5} + == APP == Published data: {"orderId":6} + == APP == Published data: {"orderId":7} + == APP == Published data: {"orderId":8} + == APP == Published data: {"orderId":9} + == APP == Published data: {"orderId":10} + ``` ++ `order-processor` output: ++ ``` + == APP == Subscriber received: {"orderId":1} + == APP == Subscriber received: {"orderId":2} + == APP == Subscriber received: {"orderId":3} + == APP == Subscriber received: {"orderId":4} + == APP == Subscriber received: {"orderId":5} + == APP == Subscriber received: {"orderId":6} + == APP == Subscriber received: {"orderId":7} + == APP == Subscriber received: {"orderId":8} + == APP == Subscriber received: {"orderId":9} + == APP == Subscriber received: {"orderId":10} + ``` ++1. Make sure both applications have stopped by running the following commands. In the checkout terminal: ++ ```sh + dapr stop --app-id checkout + ``` ++ In the order-processor terminal: ++ ```sh + dapr stop --app-id order-processor + ``` ++## Deploy the Dapr application template using Azure Developer CLI ++Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). ++### Prepare the project ++In a new terminal window, navigate into the [sample's](https://github.com/Azure-Samples/pubsub-dapr-python-servicebus) root directory. ++```bash +cd pubsub-dapr-python-servicebus +``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. 
The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output ++ ```azdeveloper + Initializing a new project (azd init) + + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: portal-dashboard-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Key vault: key-vault-name + (Γ£ô) Done: Container Apps Environment: ca-env-name + (Γ£ô) Done: Container App: ca-checkout-name + (Γ£ô) Done: Container App: ca-orders-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service checkout + (Γ£ô) Done: Deploying service orders + - Endpoint: https://ca-orders-name.endpoint.region.azurecontainerapps.io/ + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/subscription-id/resourceGroups/resource-group-name/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the `checkout` service is publishing messages to the Azure Service Bus topic. ++1. Copy the `checkout` container app name from the terminal output. ++1. Go to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name. ++1. In the Container Apps dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of navigating to the Log stream page in the Azure portal."::: +++1. Confirm the `checkout` container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-checkout-pubsub.png" alt-text="Screenshot of the checkout service container's log stream in the Azure portal."::: ++1. Do the same for the `order-processor` service. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-order-processor-pubsub.png" alt-text="Screenshot of the order processor service container's log stream in the Azure portal."::: ++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/pubsub-dapr-python-servicebus/tree/main/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse to the fully functional app. +++++## Run the .NET applications locally ++Before deploying the application to Azure Container Apps, run the `order-processor` and `checkout` services locally with Dapr and Azure Service Bus. ++### Prepare the project ++1. 
Clone the [sample Dapr application](https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus.git + ``` ++1. Navigate into the sample's root directory. ++ ```bash + cd pubsub-dapr-csharp-servicebus + ``` ++### Run the Dapr applications using the Dapr CLI ++Start by running the `order-processor` subscriber service with Dapr. ++1. From the sample's root directory, change directories to `order-processor`. ++ ```bash + cd order-processor + ``` +1. Install the dependencies. ++ ```bash + dotnet build + ``` ++1. Run the `order-processor` service with Dapr. ++ ```bash + dapr run --app-id order-processor --resources-path ../components/ --app-port 7001 -- dotnet run --project . + ``` ++1. In a new terminal window, from the sample's root directory, navigate to the `checkout` publisher service. ++ ```bash + cd checkout + ``` ++1. Install the dependencies. ++ ```bash + dotnet build + ``` ++1. Run the `checkout` service with Dapr. ++ ```bash + dapr run --app-id checkout --resources-path ../components/ -- dotnet run --project . + ``` ++ #### Expected output ++ In both terminals, the `checkout` service publishes 10 messages received by the `order-processor` service before exiting. ++ `checkout` output: ++ ``` + == APP == Published data: {"orderId":1} + == APP == Published data: {"orderId":2} + == APP == Published data: {"orderId":3} + == APP == Published data: {"orderId":4} + == APP == Published data: {"orderId":5} + == APP == Published data: {"orderId":6} + == APP == Published data: {"orderId":7} + == APP == Published data: {"orderId":8} + == APP == Published data: {"orderId":9} + == APP == Published data: {"orderId":10} + ``` ++ `order-processor` output: ++ ``` + == APP == Subscriber received: {"orderId":1} + == APP == Subscriber received: {"orderId":2} + == APP == Subscriber received: {"orderId":3} + == APP == Subscriber received: {"orderId":4} + == APP == Subscriber received: {"orderId":5} + == APP == Subscriber received: {"orderId":6} + == APP == Subscriber received: {"orderId":7} + == APP == Subscriber received: {"orderId":8} + == APP == Subscriber received: {"orderId":9} + == APP == Subscriber received: {"orderId":10} + ``` ++1. Make sure both applications have stopped by running the following commands. In the checkout terminal. ++ ```sh + dapr stop --app-id checkout + ``` ++ In the order-processor terminal: ++ ```sh + dapr stop --app-id order-processor + ``` ++## Deploy the Dapr application template using Azure Developer CLI ++Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). ++### Prepare the project ++In a new terminal window, navigate into the [sample's](https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus) root directory. ++```bash +cd pubsub-dapr-csharp-servicebus +``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. 
++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output ++ ```azdeveloper + Initializing a new project (azd init) + + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: portal-dashboard-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Key vault: key-vault-name + (Γ£ô) Done: Container Apps Environment: ca-env-name + (Γ£ô) Done: Container App: ca-checkout-name + (Γ£ô) Done: Container App: ca-orders-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service checkout + (Γ£ô) Done: Deploying service orders + - Endpoint: https://ca-orders-name.endpoint.region.azurecontainerapps.io/ + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/subscription-id/resourceGroups/resource-group-name/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the `checkout` service is publishing messages to the Azure Service Bus topic. ++1. Copy the `checkout` container app name from the terminal output. ++1. Go to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name. ++1. In the Container Apps dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of navigating to the Log stream page in the Azure portal."::: +++1. Confirm the `checkout` container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-checkout-pubsub.png" alt-text="Screenshot of the checkout service container's log stream in the Azure portal."::: ++1. Do the same for the `order-processor` service. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-order-processor-pubsub.png" alt-text="Screenshot of the order processor service container's log stream in the Azure portal."::: ++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus/tree/main/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse to the fully functional app. 
+++## Clean up resources ++If you're not going to continue to use this application, delete the Azure resources you've provisioned with the following command: ++```azdeveloper +azd down +``` ++## Next steps ++- Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md). +- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible). |
container-apps | Microservices Dapr Service Invoke | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-service-invoke.md | + + Title: "Microservices communication using Dapr Service Invocation" ++description: Enable two sample Dapr applications to communicate and leverage Azure Container Apps. ++++ Last updated : 02/06/2023+zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json +zone_pivot_groups: dapr-languages-set +++# Microservices communication using Dapr Service Invocation ++In this tutorial, you'll: +> [!div class="checklist"] +> * Create and run locally two microservices that communicate securely using auto-mTLS and reliably using built-in retries via [Dapr's Service Invocation API](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/). +> * Deploy the application to Azure Container Apps via the Azure Developer CLI with the provided Bicep. ++The sample service invocation project includes: +1. A `checkout` service that uses Dapr's HTTP proxying capability on a loop to invoke a request on the `order-processor` service. +1. A `order-processor` service that receives the request from the `checkout` service. +++> [!NOTE] +> This tutorial uses [Azure Developer CLI (`azd`)](/azure/developer/azure-developer-cli/overview), which is currently in preview. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. The `azd` previews are partially covered by customer support on a best-effort basis. ++## Prerequisites ++- Install [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) +- [Install](https://docs.dapr.io/getting-started/install-dapr-cli/) and [init](https://docs.dapr.io/getting-started/install-dapr-selfhost/) Dapr +- [Docker Desktop](https://www.docker.com/products/docker-desktop/) +- Install [Git](https://git-scm.com/downloads) +++## Run the Node.js applications locally ++Before deploying the application to Azure Container Apps, start by running the `order-processor` and `checkout` services locally with Dapr. ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/svc-invoke-dapr-nodejs) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/svc-invoke-dapr-nodejs.git + ``` ++1. Navigate into the sample's root directory. ++ ```bash + cd svc-invoke-dapr-nodejs + ``` ++### Run the Dapr applications using the Dapr CLI ++Start by running the `order-processor` service. ++1. From the sample's root directory, change directories to `order-processor`. ++ ```bash + cd order-processor + ``` +1. Install the dependencies. ++ ```bash + npm install + ``` ++1. Run the `order-processor` service with Dapr. ++ ```bash + dapr run --app-port 5001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- npm start + ``` ++1. In a new terminal window, from the sample's root directory, navigate to the `checkout` caller service. ++ ```bash + cd checkout + ``` ++1. Install the dependencies. ++ ```bash + npm install + ``` ++1. Run the `checkout` service with Dapr. ++ ```bash + dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- npm start + ``` ++ #### Expected output ++ In both terminals, the `checkout` service is calling orders to the `order-processor` service in a loop. 
++ `checkout` output: ++ ``` + == APP == Order passed: {"orderId":1} + == APP == Order passed: {"orderId":2} + == APP == Order passed: {"orderId":3} + == APP == Order passed: {"orderId":4} + == APP == Order passed: {"orderId":5} + == APP == Order passed: {"orderId":6} + == APP == Order passed: {"orderId":7} + == APP == Order passed: {"orderId":8} + == APP == Order passed: {"orderId":9} + == APP == Order passed: {"orderId":10} + == APP == Order passed: {"orderId":11} + == APP == Order passed: {"orderId":12} + == APP == Order passed: {"orderId":13} + == APP == Order passed: {"orderId":14} + == APP == Order passed: {"orderId":15} + == APP == Order passed: {"orderId":16} + == APP == Order passed: {"orderId":17} + == APP == Order passed: {"orderId":18} + == APP == Order passed: {"orderId":19} + == APP == Order passed: {"orderId":20} + ``` ++ `order-processor` output: ++ ``` + == APP == Order received: { orderId: 1 } + == APP == Order received: { orderId: 2 } + == APP == Order received: { orderId: 3 } + == APP == Order received: { orderId: 4 } + == APP == Order received: { orderId: 5 } + == APP == Order received: { orderId: 6 } + == APP == Order received: { orderId: 7 } + == APP == Order received: { orderId: 8 } + == APP == Order received: { orderId: 9 } + == APP == Order received: { orderId: 10 } + == APP == Order received: { orderId: 11 } + == APP == Order received: { orderId: 12 } + == APP == Order received: { orderId: 13 } + == APP == Order received: { orderId: 14 } + == APP == Order received: { orderId: 15 } + == APP == Order received: { orderId: 16 } + == APP == Order received: { orderId: 17 } + == APP == Order received: { orderId: 18 } + == APP == Order received: { orderId: 19 } + == APP == Order received: { orderId: 20 } + ``` ++1. Press <kbd>Cmd/Ctrl</kbd> + <kbd>C</kbd> in both terminals to exit out of the service-to-service invocation. ++## Deploy the Dapr application template using Azure Developer CLI ++Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). ++### Prepare the project ++In a new terminal window, navigate into the sample's root directory. ++ ```bash + cd svc-invoke-dapr-nodejs + ``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. 
The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output ++ ```azdeveloper + Initializing a new project (azd init) + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: dashboard-name + (Γ£ô) Done: Container Apps Environment: container-apps-env-name + (Γ£ô) Done: Container App: ca-checkout-name + (Γ£ô) Done: Container App: ca-order-processor-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service api + - Endpoint: https://ca-order-processor-name.eastus.azurecontainerapps.io/ + (Γ£ô) Done: Deploying service worker + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/<your-azure-subscription>/resourceGroups/resource-group-name/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the `checkout` service is passing orders to the `order-processor` service. ++1. Copy the `checkout` container app's name from the terminal output. ++1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name. ++1. In the Container Apps dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of navigating to the Log stream page in the Azure portal."::: ++1. Confirm the `checkout` container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-checkout-svc.png" alt-text="Screenshot of the checkout service container's log stream in the Azure portal."::: ++1. Do the same for the `order-processor` service. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-order-processor-svc.png" alt-text="Screenshot of the order processor service container's log stream in the Azure portal."::: +++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/svc-invoke-dapr-nodejs/tree/main/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse the fully functional app. +++++## Run the Python applications locally ++Before deploying the application to Azure Container Apps, start by running the `order-processor` and `checkout` services locally with Dapr. ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/svc-invoke-dapr-python) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/svc-invoke-dapr-python.git + ``` ++1. Navigate into the sample's root directory. 
++ ```bash + cd svc-invoke-dapr-python + ``` ++### Run the Dapr applications using the Dapr CLI ++Start by running the `order-processor` service. ++1. From the sample's root directory, change directories to `order-processor`. ++ ```bash + cd order-processor + ``` +1. Install the dependencies. ++ ```bash + pip3 install -r requirements.txt + ``` ++1. Run the `order-processor` service with Dapr. ++ ```bash + dapr run --app-port 8001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- python3 app.py + ``` ++1. In a new terminal window, from the sample's root directory, navigate to the `checkout` caller service. ++ ```bash + cd checkout + ``` ++1. Install the dependencies. ++ ```bash + pip3 install -r requirements.txt + ``` ++1. Run the `checkout` service with Dapr. ++ ```bash + dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- python3 app.py + ``` ++ #### Expected output ++ In both terminals, the `checkout` service is calling orders to the `order-processor` service in a loop. ++ `checkout` output: ++ ``` + == APP == Order passed: {"orderId":1} + == APP == Order passed: {"orderId":2} + == APP == Order passed: {"orderId":3} + == APP == Order passed: {"orderId":4} + == APP == Order passed: {"orderId":5} + == APP == Order passed: {"orderId":6} + == APP == Order passed: {"orderId":7} + == APP == Order passed: {"orderId":8} + == APP == Order passed: {"orderId":9} + == APP == Order passed: {"orderId":10} + == APP == Order passed: {"orderId":11} + == APP == Order passed: {"orderId":12} + == APP == Order passed: {"orderId":13} + == APP == Order passed: {"orderId":14} + == APP == Order passed: {"orderId":15} + == APP == Order passed: {"orderId":16} + == APP == Order passed: {"orderId":17} + == APP == Order passed: {"orderId":18} + == APP == Order passed: {"orderId":19} + == APP == Order passed: {"orderId":20} + ``` ++ `order-processor` output: ++ ``` + == APP == Order received: { orderId: 1 } + == APP == Order received: { orderId: 2 } + == APP == Order received: { orderId: 3 } + == APP == Order received: { orderId: 4 } + == APP == Order received: { orderId: 5 } + == APP == Order received: { orderId: 6 } + == APP == Order received: { orderId: 7 } + == APP == Order received: { orderId: 8 } + == APP == Order received: { orderId: 9 } + == APP == Order received: { orderId: 10 } + == APP == Order received: { orderId: 11 } + == APP == Order received: { orderId: 12 } + == APP == Order received: { orderId: 13 } + == APP == Order received: { orderId: 14 } + == APP == Order received: { orderId: 15 } + == APP == Order received: { orderId: 16 } + == APP == Order received: { orderId: 17 } + == APP == Order received: { orderId: 18 } + == APP == Order received: { orderId: 19 } + == APP == Order received: { orderId: 20 } + ``` ++1. Press <kbd>Cmd/Ctrl</kbd> + <kbd>C</kbd> in both terminals to exit out of the service-to-service invocation ++## Deploy the Dapr application template using Azure Developer CLI ++Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). ++### Prepare the project ++1. In a new terminal window, navigate into the [sample's](https://github.com/Azure-Samples/svc-invoke-dapr-python) root directory. ++ ```bash + cd svc-invoke-dapr-python + ``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. 
++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output ++ ```azdeveloper + Initializing a new project (azd init) + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: dashboard-name + (Γ£ô) Done: Container Apps Environment: container-apps-env-name + (Γ£ô) Done: Container App: ca-checkout-name + (Γ£ô) Done: Container App: ca-order-processor-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service api + - Endpoint: https://ca-order-processor-name.eastus.azurecontainerapps.io/ + (Γ£ô) Done: Deploying service worker + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/<your-azure-subscription>/resourceGroups/resource-group-name/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the `checkout` service is passing orders to the `order-processor` service. ++1. Copy the `checkout` container app's name from the terminal output. ++1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name. ++1. In the Container Apps dashboard, select **Monitoring** > **Log stream**. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of navigating to the Log stream page in the Azure portal."::: ++1. Confirm the `checkout` container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-checkout-svc.png" alt-text="Screenshot of the checkout service container's log stream in the Azure portal."::: ++1. Do the same for the `order-processor` service. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-order-processor-svc.png" alt-text="Screenshot of the order processor service container's log stream in the Azure portal."::: +++## What happened? 
++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/svc-invoke-dapr-python/tree/main/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse the fully functional app. ++++## Run the .NET applications locally ++Before deploying the application to Azure Container Apps, start by running the `order-processor` and `checkout` services locally with Dapr. ++### Prepare the project ++1. Clone the [sample Dapr application](https://github.com/Azure-Samples/svc-invoke-dapr-csharp) to your local machine. ++ ```bash + git clone https://github.com/Azure-Samples/svc-invoke-dapr-csharp.git + ``` ++1. Navigate into the sample's root directory. ++ ```bash + cd svc-invoke-dapr-csharp + ``` ++### Run the Dapr applications using the Dapr CLI ++Start by running the `order-processor` callee service with Dapr. ++1. From the sample's root directory, change directories to `order-processor`. ++ ```bash + cd order-processor + ``` +1. Install the dependencies. ++ ```bash + dotnet build + ``` ++1. Run the `order-processor` service with Dapr. ++ ```bash + dapr run --app-port 7001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- dotnet run + ``` ++1. In a new terminal window, from the sample's root directory, navigate to the `checkout` caller service. ++ ```bash + cd checkout + ``` ++1. Install the dependencies. ++ ```bash + dotnet build + ``` ++1. Run the `checkout` service with Dapr. ++ ```bash + dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- dotnet run + ``` ++ #### Expected output ++ In both terminals, the `checkout` service is calling orders to the `order-processor` service in a loop. 
++ `checkout` output: ++ ``` + == APP == Order passed: {"orderId":1} + == APP == Order passed: {"orderId":2} + == APP == Order passed: {"orderId":3} + == APP == Order passed: {"orderId":4} + == APP == Order passed: {"orderId":5} + == APP == Order passed: {"orderId":6} + == APP == Order passed: {"orderId":7} + == APP == Order passed: {"orderId":8} + == APP == Order passed: {"orderId":9} + == APP == Order passed: {"orderId":10} + == APP == Order passed: {"orderId":11} + == APP == Order passed: {"orderId":12} + == APP == Order passed: {"orderId":13} + == APP == Order passed: {"orderId":14} + == APP == Order passed: {"orderId":15} + == APP == Order passed: {"orderId":16} + == APP == Order passed: {"orderId":17} + == APP == Order passed: {"orderId":18} + == APP == Order passed: {"orderId":19} + == APP == Order passed: {"orderId":20} + ``` ++ `order-processor` output: ++ ``` + == APP == Order received: { orderId: 1 } + == APP == Order received: { orderId: 2 } + == APP == Order received: { orderId: 3 } + == APP == Order received: { orderId: 4 } + == APP == Order received: { orderId: 5 } + == APP == Order received: { orderId: 6 } + == APP == Order received: { orderId: 7 } + == APP == Order received: { orderId: 8 } + == APP == Order received: { orderId: 9 } + == APP == Order received: { orderId: 10 } + == APP == Order received: { orderId: 11 } + == APP == Order received: { orderId: 12 } + == APP == Order received: { orderId: 13 } + == APP == Order received: { orderId: 14 } + == APP == Order received: { orderId: 15 } + == APP == Order received: { orderId: 16 } + == APP == Order received: { orderId: 17 } + == APP == Order received: { orderId: 18 } + == APP == Order received: { orderId: 19 } + == APP == Order received: { orderId: 20 } + ``` ++1. Press <kbd>Cmd/Ctrl</kbd> + <kbd>C</kbd> in both terminals to exit out of the service-to-service invocation. ++## Deploy the Dapr application template using Azure Developer CLI ++Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). ++### Prepare the project ++In a new terminal window, navigate into the [sample's](https://github.com/Azure-Samples/svc-invoke-dapr-csharp) root directory. ++ ```bash + cd svc-invoke-dapr-csharp + ``` ++### Provision and deploy using Azure Developer CLI ++1. Run `azd init` to initialize the project. ++ ```azdeveloper + azd init + ``` ++1. When prompted in the terminal, provide the following parameters. ++ | Parameter | Description | + | | -- | + | Environment Name | Prefix for the resource group created to hold all Azure resources. | + | Azure Location | The Azure location for your resources. | + | Azure Subscription | The Azure subscription for your resources. | ++1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command. ++ ```azdeveloper + azd up + ``` ++ This process may take some time to complete. As the `azd up` command completes, the CLI output displays two Azure portal links to monitor the deployment progress. The output also demonstrates how `azd up`: ++ - Creates and configures all necessary Azure resources via the provided Bicep files in the `./infra` directory using `azd provision`. Once provisioned by Azure Developer CLI, you can access these resources via the Azure portal. 
The files that provision the Azure resources include: + - `main.parameters.json` + - `main.bicep` + - An `app` resources directory organized by functionality + - A `core` reference library that contains the Bicep modules used by the `azd` template + - Deploys the code using `azd deploy` ++ #### Expected output ++ ```azdeveloper + Initializing a new project (azd init) + + Provisioning Azure resources (azd provision) + Provisioning Azure resources can take some time + + You can view detailed progress in the Azure Portal: + https://portal.azure.com + + (Γ£ô) Done: Resource group: resource-group-name + (Γ£ô) Done: Log Analytics workspace: log-analytics-name + (Γ£ô) Done: Application Insights: app-insights-name + (Γ£ô) Done: Portal dashboard: dashboard-name + (Γ£ô) Done: Container Apps Environment: container-apps-env-name + (Γ£ô) Done: Container App: ca-checkout-name + (Γ£ô) Done: Container App: ca-order-processor-name + + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service api + - Endpoint: https://ca-order-processor-name.eastus.azurecontainerapps.io/ + (Γ£ô) Done: Deploying service worker + + SUCCESS: Your Azure app has been deployed! + You can view the resources created under the resource group resource-group-name in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/<your-azure-subscription>/resourceGroups/resource-group-name/overview + ``` ++### Confirm successful deployment ++In the Azure portal, verify the `checkout` service is passing orders to the `order-processor` service. ++1. Copy the `checkout` container app's name from the terminal output. ++1. Navigate to the [Azure portal](https://ms.portal.azure.com) and search for the container app resource by name. ++1. In the Container Apps dashboard, select **Monitoring** > **Log stream**. + + :::image type="content" source="media/microservices-dapr-azd/log-streams-menu.png" alt-text="Screenshot of navigating to the Log stream page in the Azure portal."::: +++1. Confirm the `checkout` container is logging the same output as in the terminal earlier. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-checkout-svc.png" alt-text="Screenshot of the checkout service container's log stream in the Azure portal."::: ++1. Do the same for the `order-processor` service. ++ :::image type="content" source="media/microservices-dapr-azd/log-streams-portal-view-order-processor-svc.png" alt-text="Screenshot of the order processor service container's log stream in the Azure portal."::: +++## What happened? ++Upon successful completion of the `azd up` command: ++- Azure Developer CLI provisioned the Azure resources referenced in the [sample project's `./infra` directory](https://github.com/Azure-Samples/svc-invoke-dapr-csharp/tree/main/infra) to the Azure subscription you specified. You can now view those Azure resources via the Azure portal. +- The app deployed to Azure Container Apps. From the portal, you can browse the fully functional app. ++++## Clean up resources ++If you're not going to continue to use this application, delete the Azure resources you've provisioned with the following command: ++```azdeveloper +azd down +``` ++## Next steps ++- Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md). +- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible). |
container-apps | Storage Mounts Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts-azure-files.md | Title: "Tutorial: Create an Azure Files storage mount in Azure Container Apps" + Title: "Tutorial: Create an Azure Files volume mount in Azure Container Apps" description: Learn to create an Azure Files storage mount in Azure Container Apps Last updated 07/19/2022 -# Tutorial: Create an Azure Files storage mount in Azure Container Apps +# Tutorial: Create an Azure Files volume mount in Azure Container Apps -Learn to write to permanent storage in a container app using an Azure Files storage mount. --> [!NOTE] -> The volume mounting features in Azure Container Apps are in preview. +Learn to write to permanent storage in a container app using an Azure Files storage mount. For more information about storage mounts, see [Use storage mounts in Azure Container Apps](storage-mounts.md). In this tutorial, you learn how to: |
container-apps | Storage Mounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md | -zone_pivot_groups: container-apps-config-types +zone_pivot_groups: arm-azure-cli-portal # Use storage mounts in Azure Container Apps A container app has access to different types of storage. A single app can take | Storage type | Description | Usage examples | |--|--|--| | [Container file system](#container-file-system) | Temporary storage scoped to the local container | Writing a local app cache. |-| [Temporary storage](#temporary-storage) | Temporary storage scoped to an individual replica | Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container. | +| [Ephemeral storage](#temporary-storage) | Temporary storage scoped to an individual replica | Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container. | | [Azure Files](#azure-files) | Permanent storage | Writing files to a file share to make data accessible by other systems. | -> [!NOTE] -> The volume mounting features in Azure Container Apps are in preview. - ## Container file system A container can write to its own file system. Container file system storage has the following characteristics: * Files written to this storage are only visible to processes running in the current container. * There are no capacity guarantees. The available storage depends on the amount of disk space available in the container. -## Temporary storage +## <a name="temporary-storage"></a>Ephemeral volume -You can mount an ephemeral volume that is equivalent to [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) in Kubernetes. Temporary storage is scoped to a single replica. +You can mount an ephemeral, temporary volume that is equivalent to [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) in Kubernetes. Ephemeral storage is scoped to a single replica. -Temporary storage has the following characteristics: +Ephemeral storage has the following characteristics: * Files are persisted for the lifetime of the replica. * If a container in a replica restarts, the files in the volume remain. * Any containers in the replica can mount the same volume.-* A container can mount multiple temporary volumes. -* There are no capacity guarantees. The available storage depends on the amount of disk space available in the replica. +* A container can mount multiple ephemeral volumes. +* The available storage depends on the total amount of vCPUs allocated to the replica. ++ | vCPUs | Ephemeral storage | + |--|--| + | Up to 0.25 | 1 GiB | + | Up to 0.5 | 2 GiB | + | Up to 1 | 4 GiB | + | Over 1 | 8 GiB | -To configure temporary storage, first define an `EmptyDir` volume in the revision. Then define a volume mount in one or more containers in the revision. +To configure ephemeral storage, first define an `EmptyDir` volume in the revision. Then define a volume mount in one or more containers in the revision. ### Prerequisites To configure temporary storage, first define an `EmptyDir` volume in the revisio ### Configuration -When using temporary storage, you must use the Azure CLI with a YAML definition to create or update your container app. +When configuring ephemeral storage using the Azure CLI, you must use a YAML definition to create or update your container app. -1. 
To update an existing container app to use temporary storage, export your app's specification to a YAML file named *app.yaml*. +1. To update an existing container app to use ephemeral storage, export your app's specification to a YAML file named *app.yaml*. ```azure-cli az containerapp show -n <APP_NAME> -g <RESOURCE_GROUP_NAME> -o yaml > app.yaml When using temporary storage, you must use the Azure CLI with a YAML definition 1. Make the following changes to your container app specification. - - Add a `volumes` array to the `template` section of your container app definition and define a volume. + - Add a `volumes` array to the `template` section of your container app definition and define a volume. If you already have a `volumes` array, add a new volume to the array. - The `name` is an identifier for the volume. - Use `EmptyDir` as the `storageType`.- - For each container in the template that you want to mount temporary storage, add a `volumeMounts` array to the container definition and define a volume mount. + - For each container in the template that you want to mount the ephemeral volume, define a volume mount in the `volumeMounts` array of the container definition. - The `volumeName` is the name defined in the `volumes` array. - The `mountPath` is the path in the container to mount the volume. When using temporary storage, you must use the Azure CLI with a YAML definition --yaml app.yaml ``` +See the [YAML specification](azure-resource-manager-api-spec.md?tabs=yaml) for a full example. + ::: zone-end -To create a temporary volume and mount it in a container, make the following changes to the container apps resource in an ARM template: +To create an ephemeral volume and mount it in a container, make the following changes to the container apps resource in an ARM template: -- Add a `volumes` array to the `template` section of your container app definition and define a volume.+- Add a `volumes` array to the `template` section of your container app definition and define a volume. If you already have a `volumes` array, add a new volume to the array. - The `name` is an identifier for the volume. - Use `EmptyDir` as the `storageType`.-- For each container in the template that you want to mount temporary storage, add a `volumeMounts` array to the container definition and define a volume mount.+- For each container in the template that you want to mount temporary storage, define a volume mount in the `volumeMounts` array of the container definition. - The `volumeName` is the name defined in the `volumes` array. - The `mountPath` is the path in the container to mount the volume. See the [ARM template API specification](azure-resource-manager-api-spec.md) for ::: zone-end -## Azure Files -You can mount a file share from [Azure Files](../storage/files/index.yml) as a volume inside a container. -For a step-by-step tutorial, refer to [Create an Azure Files storage mount in Azure Container Apps](storage-mounts-azure-files.md). +To create an ephemeral volume and mount it in a container, deploy a new revision of your container app using the Azure portal. ++1. In the Azure portal, navigate to your container app. ++1. Select **Revision management** in the left menu. ++1. Select **Create new revision**. ++1. Select the container where you want to mount the volume. ++1. In the *Edit a container* context pane, select the **Volume mounts** tab. ++1. Under the *Ephemeral storage* section, create a new volume with the following information. ++ - **Volume name**: A name for the ephemeral volume. 
+ - **Mount path**: The absolute path in the container to mount the volume. ++1. Select **Save** to save changes and exit the context pane. ++1. Select **Create** to create the new revision. +++## <a name="azure-files"></a>Azure Files volume ++You can mount a file share from [Azure Files](../storage/files/index.yml) as a volume in a container. Azure Files storage has the following characteristics: Azure Files storage has the following characteristics: To enable Azure Files storage in your container, you need to set up your container in the following ways: -* Create a storage definition of type `AzureFile` in the Container Apps environment. -* Define a storage volume in a revision. +* Create a storage definition in the Container Apps environment. +* Define a volume of type `AzureFile` in a revision. * Define a volume mount in one or more containers in the revision. #### Prerequisites To enable Azure Files storage in your container, you need to set up your contain ### Configuration -When using Azure Files, you must use the Azure CLI with a YAML definition to create or update your container app. +When configuring a container app to mount an Azure Files volume using the Azure CLI, you must use a YAML definition to create or update your container app. -1. Add a storage definition of type `AzureFile` to your Container Apps environment. +For a step-by-step tutorial, refer to [Create an Azure Files storage mount in Azure Container Apps](storage-mounts-azure-files.md). ++1. Add a storage definition to your Container Apps environment. ```azure-cli az containerapp env storage set --name my-env --resource-group my-group \ When using Azure Files, you must use the Azure CLI with a YAML definition to cre 1. Make the following changes to your container app specification. - - Add a `volumes` array to the `template` section of your container app definition and define a volume. + - Add a `volumes` array to the `template` section of your container app definition and define a volume. If you already have a `volumes` array, add a new volume to the array. - The `name` is an identifier for the volume. - For `storageType`, use `AzureFile`. - For `storageName`, use the name of the storage you defined in the environment.- - For each container in the template that you want to mount Azure Files storage, add a `volumeMounts` array to the container definition and define a volume mount. + - For each container in the template that you want to mount Azure Files storage, define a volume mount in the `volumeMounts` array of the container definition. - The `volumeName` is the name defined in the `volumes` array. - The `mountPath` is the path in the container to mount the volume. When using Azure Files, you must use the Azure CLI with a YAML definition to cre --yaml app.yaml ``` +See the [YAML specification](azure-resource-manager-api-spec.md?tabs=yaml) for a full example. + ::: zone-end The following ARM template snippets demonstrate how to add an Azure Files share to a Container Apps environment and use it in a container app. The following ARM template snippets demonstrate how to add an Azure Files share } ``` - - Add a `volumes` array to the `template` section of your container app definition and define a volume. + - Add a `volumes` array to the `template` section of your container app definition and define a volume. If you already have a `volumes` array, add a new volume to the array. - The `name` is an identifier for the volume. - For `storageType`, use `AzureFile`. 
- For `storageName`, use the name of the storage you defined in the environment.- - For each container in the template that you want to mount Azure Files storage, add a `volumeMounts` array to the container definition and define a volume mount. + - For each container in the template that you want to mount Azure Files storage, define a volume mount in the `volumeMounts` array of the container definition. - The `volumeName` is the name defined in the `volumes` array. - The `mountPath` is the path in the container to mount the volume. See the [ARM template API specification](azure-resource-manager-api-spec.md) for a full example. ::: zone-end+++To configure a volume mount for Azure Files storage in the Azure portal, add a file share to your Container Apps environment and then add a volume mount to your container app by creating a new revision. ++1. In the Azure portal, navigate to your Container Apps environment. ++1. Select **Azure Files** from the left menu. ++1. Select **Add**. ++1. In the *Add file share* context menu, enter the following information: ++ - **Name**: A name for the file share. + - **Storage account name**: The name of the storage account that contains the file share. + - **Storage account key**: The access key for the storage account. + - **File share**: The name of the file share. + - **Access mode**: The access mode for the file share. Valid values are "Read/Write" and "Read only". ++1. Select **Add** to exit the context pane. ++1. Select **Save** to commit the changes. ++1. Navigate to your container app. ++1. Select **Revision management** from the left menu. ++1. Select **Create new revision**. ++1. Select the container that you want to mount the volume in. ++1. In the *Edit a container* context pane, select the **Volume mounts** tab. ++1. Under the *File shares* section, create a new volume with the following information. ++ - **File share name**: The file share you added. + - **Mount path**: The absolute path in the container to mount the volume. ++1. Select **Save** to save changes and exit the context pane. ++1. Select **Create** to create the new revision. + |
cosmos-db | Troubleshoot Dotnet Sdk Slow Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-slow-request.md | For multiple store results for a single request, be aware of the following: * Strong consistency and bounded staleness consistency always have at least two store results. * Check the status code of each `StoreResult`. The SDK retries automatically on multiple different [transient failures](troubleshoot-dotnet-sdk-request-timeout.md). The SDK is constantly improved to cover more scenarios. -### RntbdRequestStats +### RequestTimeline Show the time for the different stages of sending and receiving a request in the transport layer. -* `ChannelAcquisitionStarted`: The time to get or create a new connection. You can create new connections for numerous different regions. For example, let's say that a connection was unexpectedly closed, or too many requests were getting sent through the existing connections. You create a new connection. -* *Pipelined time is large* might be caused by a large request. -* *Transit time is large*, which leads to a networking problem. Compare this number to the `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service. -* *Received time is large* might be caused by a thread starvation problem. This is the time between having the response and returning the result. +* `ChannelAcquisitionStarted`: The time to get or create a new connection. You can create new connections for numerous different regions. For example, let's say that a connection was unexpectedly closed, or too many requests were getting sent through the existing connections. You create a new connection. +* A large `Pipelined` time might be caused by a large request. +* A large `Transit time` indicates a networking problem. Compare this number to `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service. +* A large `Received` time might be caused by a thread starvation problem. This is the time between having the response and returning the result. ### ServiceEndpointStatistics |
databox-online | Azure Stack Edge Reset Reactivate Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-reset-reactivate-device.md | You can reset your device in the local web UI or in PowerShell. For PowerShell i ## Reactivate device -After you reset the device, you'll need to reactivate the device as a new resource. After placing a new order, you'll need to reconfigure and then reactivate the new resource. +After you reset the device, you must reactivate the device as a new management resource. After placing a new order, you must reconfigure and then reactivate the new resource. -To reactivate your existing device, follow these steps: +Use the following steps to create a new management resource for your existing device: -1. Create a new order for the existing device by following the steps in [Create a new resource](azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal#create-a-new-resource). On the **Shipping address** tab, select **I already have a device**. +1. On the **Azure services** page of Azure portal, select **Azure Stack Edge**. + + [](./media/azure-stack-edge-reset-reactivate-device/azure-stack-edge-select-azure-stack-edge-00.png#lightbox) -  +1. On the **Azure Stack Edge** page, select **+ Create**. ++ [](./media/azure-stack-edge-reset-reactivate-device/azure-stack-edge-create-new-resource-01.png#lightbox) ++1. On the **Manage Azure Stack Edge** page, select **Manage a device**. ++ [](./media/azure-stack-edge-reset-reactivate-device/azure-stack-edge-manage-device-02.png#lightbox) ++1. On the **Basics** tab, specify project details for your resource, and then select **Next: Tags**. ++ [](./media/azure-stack-edge-reset-reactivate-device/azure-stack-edge-create-management-resource-03.png#lightbox) ++1. On the **Tags** tab, specify **Name** and **Value** tags for your management resource, and then select **Review + create**. ++ [](./media/azure-stack-edge-reset-reactivate-device/azure-stack-edge-create-tags-04.png#lightbox) ++1. On the **Review + create** tab, review **Terms and conditions** and **Basics** for your management resource, and then review and accept the **Privacy terms**. To complete the operation, select **Create**. ++ [](./media/azure-stack-edge-reset-reactivate-device/azure-stack-edge-create-resource-05.png#lightbox) ++After you create the management resource for your device, use the following steps to complete device configuration. 1. [Get the activation key](azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal#get-the-activation-key). |
defender-for-cloud | Integration Defender For Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md | You'll deploy Defender for Endpoint to your Linux machines in one of these ways, - [Enable for multiple subscriptions in the Azure portal dashboard](#enable-for-multiple-subscriptions-in-the-azure-portal-dashboard) - Enable for multiple subscriptions with a PowerShell script +> [!NOTE] +> When you enable automatic deployment, Defender for Endpoint for Linux installation will abort on machines with pre-existing security solutions using [fanotify](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements). +> After you validate potential compatibility issues, we recommend that you manually install Defender for Endpoint on these servers. ##### Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows If you've already enabled the integration with **Defender for Endpoint for Windo Microsoft Defender for Cloud will: - Automatically onboard your Linux machines to Defender for Endpoint- - Ignore any machines that are running other fanotify-based solutions (see details of the `fanotify` kernel option required in [Linux system requirements](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements)) - Detect any previous installations of Defender for Endpoint and reconfigure them to integrate with Defender for Cloud Microsoft Defender for Cloud will automatically onboard your machines to Microsoft Defender for Endpoint. Onboarding might take up to 12 hours. For new machines created after the integration has been enabled, onboarding takes up to an hour. If you've never enabled the integration for Windows, endpoint protection enables Microsoft Defender for Cloud will: - Automatically onboard your Windows and Linux machines to Defender for Endpoint- - Ignore any Linux machines that are running other fanotify-based solutions (see details of the `fanotify` kernel option required in [Linux system requirements](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements)) - Detect any previous installations of Defender for Endpoint and reconfigure them to integrate with Defender for Cloud Onboarding might take up to 1 hour. |
defender-for-cloud | Regulatory Compliance Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md | In this tutorial you'll learn how to: > [!div class="checklist"] > * Evaluate your regulatory compliance using the regulatory compliance dashboard-> * Check Microsoft's compliance offerings for Azure, Dynamics 365 and Power Platform products +> * Check Microsoft's compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products > * Improve your compliance posture by taking action on recommendations > * Download PDF/CSV reports as well as certification reports of your compliance status > * Set up alerts on changes to your compliance status Use the regulatory compliance dashboard to help focus your attention on the gaps - Automated assessments show the number of failed resources and resource types, and link you directly to the remediation experience to address those recommendations. (6) - The manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. (7) -## Investigate your regulatory compliance issues +## Investigate regulatory compliance issues You can use the information in the regulatory compliance dashboard to investigate any issues that may be affecting your compliance posture. The regulatory compliance has automated and manual assessments that may need to ### Check compliance offerings status -Transparency provided by the compliance offerings, allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform. +Transparency provided by the compliance offerings (currently in preview) allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform. **To check the compliance offerings status**: |
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | -This quickstart describes how to configure the Microsoft Dev Box Preview service by using the Azure portal to enable development teams to self-serve dev boxes. +This quickstart describes how to configure Microsoft Dev Box Preview by using the Azure portal to enable development teams to self-serve their dev boxes. This quickstart takes you through the process of setting up your Dev Box environment. You create a dev center to organize your dev box resources, configure network components to enable dev boxes to connect to your organizational resources, and create a dev box definition that will form the basis of your dev boxes. You then create a project and a dev box pool, which work together to help you give access to users who will manage or use the dev boxes. To complete this quickstart, you need: - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Owner or Contributor role on an Azure subscription or a specific resource group.-- An existing virtual network and subnet. If you don't have them, [follow these instructions to create them](#create-a-virtual-network-and-subnet).-- Network Contributor permissions on an existing virtual network (Owner or Contributor), or permission to create a new virtual network and subnet.+ - User licenses. To use Dev Box Preview, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory (Azure AD) P1. These licenses are available independently and are included in the following subscriptions: - Microsoft 365 F3 To complete this quickstart, you need: - Microsoft 365 Business Premium - Microsoft 365 Education Student Use Benefit - [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/), which allows you to use your Windows licenses on Azure with Dev Box.-- A configured and working Azure AD join or hybrid Active Directory join:-- - To learn how to join devices directly to Azure AD, see [Plan your Azure Active Directory join deployment](../active-directory/devices/azureadjoin-plan.md). - - To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Azure Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](../active-directory/devices/hybrid-azuread-join-plan.md). -- Certain ports to be open so that the Dev Box service can function, if your organization routes egress traffic through a firewall. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).-+- Certain ports to be open so that the Dev Box service can function if your organization routes egress traffic through a firewall. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). ## Create a dev center Use the following steps to create a dev center so that you can manage your dev box resources: You must have a virtual network and subnet available for your network connection 1. Select **Create**. -### Create the connection +### Create the network connection ++You now need a network connection to associate the virtual network and subnet with the dev center. 
A network connection specifies the type of join dev boxes use to join your Azure AD domain, either an Azure AD join or a hybrid Active Directory join. ++- To determine which type of join is appropriate for your dev boxes, refer to: + + - [Azure AD joined devices](/azure/active-directory/devices/concept-azure-ad-join). + - [Hybrid Azure AD joined devices](/azure/active-directory/devices/concept-azure-ad-join-hybrid). -You now need a network connection to associate the virtual network and subnet with the dev center. To create the connection, complete the steps on the relevant tab. +To create the network connection, complete the steps on the relevant tab. #### [Azure AD join](#tab/AzureADJoin/) -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **network connections**. In the list of results, select **Network connections**. You now need a network connection to associate the virtual network and subnet wi 1. When the deployment is complete, select **Go to resource**. The network connection appears on the **Network connections** page. - ## Attach a network connection to a dev center To provide network configuration information for dev boxes, associate a network connection with a dev center: To provide network configuration information for dev boxes, associate a network 1. Select the dev center that you created, and then select **Networking**. -1. Select **+ Add**. +1. Select **+ Add**. 1. On the **Add network connection** pane, select the network connection that you created, and then select **Add**. To assign roles: [!INCLUDE [supported accounts note](./includes/note-supported-accounts.md)] -## Assign the Project Admin role +## Project Admins -The Microsoft Dev Box Preview service makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their teams, like creating and managing dev box pools. To give users permissions to manage projects, assign the DevCenter Project Admin role to them. +Microsoft Dev Box Preview makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their teams, like creating and managing dev box pools. To give users permissions to manage projects, assign the DevCenter Project Admin role to them. -You can assign the DevCenter Project Admin role by using the steps described earlier in [Provide access to a dev box project](#provide-access-to-a-dev-box-project), but select the Project Admin role instead of the Dev Box User role. For more information, see [Provide access to projects for project admins](how-to-project-admin.md). +You can assign the DevCenter Project Admin role by using the steps described earlier in [Provide access to a dev box project](#provide-access-to-a-dev-box-project) and select the Project Admin role instead of the Dev Box User role. For more information, see [Provide access to projects for project admins](how-to-project-admin.md). [!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)] In this quickstart, you created a dev box project and the resources that are nec > [!div class="nextstepaction"] > [Create a dev box](./quickstart-create-dev-box.md)+ |
dns | Dns Get Started Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-terraform.md | Title: 'Quickstart: Create an Azure DNS zone and record using Terraform' description: 'In this article, you create an Azure DNS zone and record using Terraform' Previously updated : 3/16/2023 Last updated : 3/17/2023 This article shows how to use [Terraform](/azure/terraform) to create an [Azure In this article, you learn how to: > [!div class="checklist"]-> * Create a random pet name for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) +> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) > * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)-> * Create a random string using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) +> * Create a random value using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) > * Create an Azure DNS zone using [azurerm_dns_zone](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/dns_zone) > * Create an Azure DNS A record using [azurerm_dns_a_record](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/dns_a_record) |
key-vault | Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/logging.md | tags: azure-resource-manager Previously updated : 11/14/2022 Last updated : 03/16/2023 #Customer intent: As a Managed HSM administrator, I want to enable logging so I can monitor how my HSM is accessed. az monitor diagnostic-settings create --name ContosoMHSM-Diagnostics --resource What's logged: -* All authenticated REST API requests, including failed requests as a result of access permissions, system errors, or bad requests. +* All authenticated REST API requests, including failed requests as a result of access permissions, system errors, firewall blocks, or bad requests. * Managed plane operations on the Managed HSM resource itself, including creation, deletion, and updating attributes such as tags. * Security Domain related operations such as initialize & download, initialize recovery, upload * Full HSM backup, restore and selective restore operations What's logged: * Creating, modifying, or deleting the keys. * Signing, verifying, encrypting, decrypting, wrapping and unwrapping keys, listing keys. * Key backup, restore, purge-* Unauthenticated requests that result in a 401 response. Examples are requests that don't have a bearer token, that are malformed or expired, or that have an invalid token. +* Invalid paths that result in a 404 response. ## Access your logs |
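The Managed HSM logging entry above enables diagnostics with `az monitor diagnostic-settings create`. If the logs are routed to a Log Analytics workspace instead of a storage account, they can be inspected from the CLI as sketched below; the workspace GUID is a placeholder, and the table and column names are assumptions about the `AzureDiagnostics` schema rather than values confirmed by the article.

```azurecli
# Query recent audit events for the HSM from a Log Analytics workspace (schema names are assumptions).
az monitor log-analytics query \
    --workspace "<log-analytics-workspace-guid>" \
    --analytics-query "AzureDiagnostics
        | where ResourceProvider == 'MICROSOFT.KEYVAULT'
        | where Category == 'AuditEvent'
        | project TimeGenerated, OperationName, ResultType, CallerIPAddress
        | take 20" \
    --output table
```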
postgresql | Concepts High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md | Last updated 11/05/2022 [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] -Azure Database for PostgreSQL - Flexible Server offers high availability configurations with automatic failover capabilities. The high availability solution is designed to ensure that committed data is never lost because of failures and that the database won't be a single point of failure in your software architecture. When high availability is configured, flexible server automatically provisions and manages a standby replica. Write-ahead-logs (WAL) is streamed to the replica in **synchronous** mode using PostgreSQL streaming replication. There are two high availability architectural models: +Azure Database for PostgreSQL - Flexible Server offers high availability configurations with automatic failover capabilities. The high availability solution is designed to ensure that committed data is never lost because of failures and that the database won't be a single point of failure in your architecture. When high availability is configured, flexible server automatically provisions and manages a standby. Write-ahead-logs (WAL) is streamed to the replica in synchronous mode using PostgreSQL streaming replication. There are two high availability architectural models: -* **Zone-redundant HA**: This option provides a complete isolation and redundancy of infrastructure across multiple availability zones within a region. It provides the highest level of availability, but it requires you to configure application redundancy across zones. Zone-redundant HA is preferred when you want protection from availability zone level failures and when latency across the availability zone is acceptable. Zone-redundant HA is available in a [subset of Azure regions](./overview.md#azure-regions) where the region supports multiple [availability zones](../../availability-zones/az-overview.md). Uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration. --* **Same-zone HA**: This option is preferred for infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone with the lowest network latency. Same-zone HA is available in all [Azure regions](./overview.md#azure-regions) where you can deploy Flexible Server. Uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql) offered in this configuration. --High availability configuration enables automatic failover capability with zero data loss during planned events such as user-initiated scale compute operation, and also during unplanned events such as underlying hardware and software faults, network failures, and availability zone failures. +* **Zone-redundant HA**: This option provides a complete isolation and redundancy of infrastructure across multiple availability zones within a region. It provides the highest level of availability, but it requires you to configure application redundancy across availability zones. Zone-redundant HA is preferred when you want protection from availability zone failures. 
However, one should account for added latency for cross-AZ synchronous writes. This latency is more pronounced for applications with short duration transactions. Zone-redundant HA is available in a [subset of Azure regions](./overview.md#azure-regions) where the region supports multiple [availability zones](../../availability-zones/az-overview.md). Uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration. +* **Same-zone HA**: This option provide for infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone. This option lowers the latency impact but makes your application vulnerable to zone failures. Same-zone HA is available in all [Azure regions](./overview.md#azure-regions) where you can deploy Flexible Server. Uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql) offered in this configuration. + +High availability configuration enables automatic failover capability with zero data loss (i.e. RPO=0) both during planned/unplanned events. For example, user-initiated scale compute operation is a planned failover even while unplanned event refers to failures such as underlying hardware and software faults, network failures, and availability zone failures. >[!NOTE] > Both these HA deployment models architecturally behave the same. Various discussions in the following sections are applicable to both unless called out otherwise. ## High availability architecture -Azure Database for PostgreSQL Flexible server supports two high availability deployment models. One is zone-redundant HA and the other is same-zone HA. In both deployment models, when the application performs writes or commits, using PostgreSQL streaming replication, transaction logs (write-ahead logs a.k.a WAL) are written to the local disk and also replicated in *synchronous* mode to the standby replica. Once the logs are persisted on the standby replica, the application is acknowledged of the writes or commits. The standby server will be in recovery mode which keeps applying the logs, but the primary server doesn't wait for the apply to complete at the standby server. +As mentioned earlier, Azure Database for PostgreSQL Flexible server supports two high availability deployment models: zone-redundant HA and same-zone HA. In both deployment models, when the application commits a transaction, the transaction logs (write-ahead logs a.k.a WAL) are written to the data/log disk and also replicated in *synchronous* mode to the standby server. Once the logs are persisted on the standby, the transaction is considered committed and an acknowledgement is sent to the application. The standby server is always in recovery mode applying the transaction logs. However, the primary server doesn't wait for standby to apply these log records. It is possible that under heavy transaction workload, the replica server may fall behind but typically catches up to the primary with workload throughput fluctuations. ### Zone-redundant high availability Automatic backups are performed periodically from the primary database server, w ### Same-zone high availability -This model of high availability deployment enables Flexible server to be highly available within the same availability zone. 
This is supported in all regions, including regions that don't support availability zones. You can choose the region and the availability zone to deploy your primary database server. A standby replica server is **automatically** provisioned and managed in the **same** availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage, which automatically stores as **three** data copies each for primary and standby. This provides physical isolation of the entire stack between primary and standby servers within the same availability zone. +This model of high availability deployment enables Flexible server to be highly available within the same availability zone. This is supported in all regions, including regions that don't support availability zones. You can choose the region and the availability zone to deploy your primary database server. A standby server is **automatically** provisioned and managed in the **same** availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage, which automatically stores as **three** synchronous data copies each for primary and standby. This provides physical isolation of the entire stack between primary and standby servers within the same availability zone. Automatic backups are performed periodically from the primary database server, while the transaction logs are continuously archived to the backup storage from the standby replica. If the region supports availability zones, then backup data is stored on zone-redundant storage (ZRS). In regions that doesn't support availability zones, backup data is stored on local redundant storage (LRS). :::image type="content" source="./media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Same-zone high availability"::: Flexible server has a health monitoring in place that checks for the primary and There are two failover modes. - 1. With [**planned failovers**](#failover-processplanned-downtimes) (example: During maintenance window) where the failover is triggered with a known state in which the primary connections are drained, a clean shutdown is performed before the replication is severed. You can also use this to bring the primary server back to your preferred AZ. +1. With [**planned failovers**](#failover-processplanned-downtimes) (example: During maintenance window) where the failover is triggered with a known state in which the primary connections are drained, a clean shutdown is performed before the replication is severed. You can also use this to bring the primary server back to your preferred AZ. 2. With [**unplanned failover**](#failover-processunplanned-downtimes) (example: Primary server node crash), the primary is immediately fenced and hence any in-flight transactions are lost and to be retried by the application. For flexible servers configured with high availability, these maintenance activi ## Failover process - unplanned downtimes -Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, it is detected by the monitoring system and initiates a failover process. 
The process includes a few seconds of wait time to make sure it is not a false positive. The replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby to recover any residual WAL files. Once it is fully recovered, DNS for the same end point is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations. +- Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, it is detected by the monitoring system and initiates a failover process. The process includes a few seconds of wait time to make sure it is not a false positive. The replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby to recover any residual WAL files. Once it is fully recovered, DNS for the same end point is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations. ++> [!NOTE] +Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer. ->[!NOTE] -> Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer. After the failover, while a new standby server is being provisioned (which usually takes 5-10 minutes), applications can still connect to the primary server and proceed with their read/write operations. Once the standby server is established, it will start recovering the logs that were generated after the failover. See [this guide](how-to-manage-high-availability-portal.md) for managing high av Flexible servers that are configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates are replicated to the standby replica as well. So, you cannot use standby to recover from such logical errors. To recover from such errors, you have to perform point-in-time restore from the backup. Using flexible server's point-in-time restore capability, you can restore to the time before the error occurred. For databases configured with high availability, a new database server will be restored as a single zone flexible server with a new user-provided server name. You can use the restored server for few use cases: - 1. You can use the restored server for production usage and can optionally enable zone-redundant high availability. +1. You can use the restored server for production usage and can optionally enable zone-redundant high availability. 2. If you just want to restore an object, you can then export the object from the restored database server and import it to your production database server. 3. 
If you want to clone your database server for testing and development purposes, or you want to restore for any other purposes, you can perform point-in-time restore. Here are some failure scenarios that require user action to recover: No. You can either configure HA within a VNET (spanned across AZs within a region) or public access. * **Can I configure HA across regions?** <br>- No. HA is configured within a region, but across availability zones. In future, we are planning to offer read replicas that can be configured across regions for disaster recovery (DR) purposes. We will provide more details when the feature is enabled. -+ No. HA is configured within a region, but across availability zones. However, you can enable Geo-read-replica (s) in asynchronous mode to achieve Geo-resiliency. * **Can I use logical replication with HA configured servers?** <br>- You can configure logical replication with HA. However, after a failover, the logical slot details are not copied over to the standby. Hence, there is currently limited support for this configuration. -+ You can configure logical replication with HA. However, after a failover, the logical slot details are not copied over to the standby. Hence, there is currently limited support for this configuration. If you must use logical replication, you will need to re-create it after every failover. + ### Replication and failover related questions * **How does flexible server provide high availability in the event of a fault - like AZ fault?** <br> Here are some failure scenarios that require user action to recover: - Learn about [business continuity](./concepts-business-continuity.md) - Learn how to [manage high availability](./how-to-manage-high-availability-portal.md) - Learn about [backup and recovery](./concepts-backup-restore.md)+ |
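The high-availability concepts above cover zone-redundant and same-zone HA as well as planned failover back to a preferred availability zone. The following is a hedged CLI sketch of both operations; server, resource group, SKU, and zone values are placeholders, and the parameter names assume the current `az postgres flexible-server` command surface.

```azurecli
# Create a flexible server with zone-redundant HA (placeholder names and zones).
az postgres flexible-server create \
    --resource-group <resource-group-name> \
    --name <server-name> \
    --location eastus \
    --tier GeneralPurpose \
    --sku-name Standard_D4ds_v4 \
    --high-availability ZoneRedundant \
    --zone 1 \
    --standby-zone 2

# Trigger a planned failover, for example to move the primary back to the preferred zone.
az postgres flexible-server restart \
    --resource-group <resource-group-name> \
    --name <server-name> \
    --failover Planned
```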
postgresql | Concepts Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md | A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, crea ### Scale operations -- Scaling the server storage requires a server restart.+- At this time, scaling up the server storage requires a server restart. - Server storage can only be scaled in 2x increments, see [Compute and Storage](concepts-compute-storage.md) for details.-- Decreasing server storage size is currently not supported.-+- Decreasing server storage size is currently not supported. Only way to do is [dump and restore](../howto-migrate-using-dump-and-restore.md) it to a new Flexible Server. + ### Server version upgrades - Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, take a [dump and restore](../howto-migrate-using-dump-and-restore.md) it to a server that was created with the new engine version.-+ ### Storage - Once configured, storage size can't be reduced. You have to create a new server with desired storage size, perform manual [dump and restore](../howto-migrate-using-dump-and-restore.md) and migrate your database(s) to the new server. - Currently, storage auto-grow feature isn't available. You can monitor the usage and increase the storage to a higher size. -- When the storage usage reaches 95% or if the available capacity is less than 5 GiB, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. +- When the storage usage reaches 95% or if the available capacity is less than 5 GiB whichever is more, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. In rare cases, if the rate of data growth outpaces the time it takes switch to read-only mode, your Server may still run out of storage. - We recommend to set alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action such as increasing the storage size. For example, you can set an alert if the storage percent exceeds 80% usage.-- If you are using logical replication, then you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise the WAL files start to get accumulated in the primary filling up the storage. If the storage threshold exceeds certain threshold and if the logical replication slot is not in use (due to non-available subscriber), Flexible server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to storage getting filled situation. -- +- If you're using logical replication, then you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise the WAL files start to get accumulated in the primary filling up the storage. If the storage threshold exceeds certain threshold and if the logical replication slot isn't in use (due to non-available subscriber), Flexible server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to storage getting filled situation. + ### Networking - Moving in and out of VNET is currently not supported. A PostgreSQL connection, even idle, can occupy about 10 MB of memory. 
Also, crea ### Availability zones -- Manually moving servers to a different availability zone is currently not supported. However, you can enable HA using the preferred AZ as the standby zone. Once established, you can failover to the standby and subsequently disable HA. +- Manually moving servers to a different availability zone is currently not supported. However, you can enable HA using the preferred AZ as the standby zone. Once established, you can fail over to the standby and subsequently disable HA. ### Postgres engine, extensions, and PgBouncer -- Postgres 10 and older aren't supported. We recommend using the [Single Server](../overview-single-server.md) option if you require older Postgres versions.-- Extension support is currently limited to the Postgres `contrib` extensions.+- Postgres 10 and older aren't supported as those are already retired by the open-source community. If you must use one of these versions, you'll need to use the [Single Server](../overview-single-server.md) option which supports the older major versions 95, 96 and 10. +- Flexible Server supports all `contrib` extensions and more. Please refer to [PostgreSQL extensions](/azure/postgresql/flexible-server/concepts-extensions). - Built-in PgBouncer connection pooler is currently not available for Burstable servers. - SCRAM authentication isn't supported with connectivity using built-in PgBouncer.-+ ### Stop/start operation -- Server can't be stopped for more than seven days.-+- Once you stop the Flexible Server, it automatically starts after 7- days. + ### Scheduled maintenance -- Changing the maintenance window less than five days before an already planned upgrade, won't affect that upgrade. Changes only take effect with the next scheduled maintenance.-+- You can change custom maintenance window to any day/time of the week. However, any changes made after receiving the maintenance notification will have no impact on the next maintenance. Changes only take effect with the following monthly scheduled maintenance. + ### Backing up a server -- Backups are managed by the system, there is currently no way to run these backups manually. We recommend using `pg_dump` instead.-- Backups are always snapshot-based full backups (not differential backups), possibly leading to higher backup storage utilization. The transaction logs (write ahead logs - WAL) are separate from the full/differential backups, and are archived continuously.-+- Backups are managed by the system, there's currently no way to run these backups manually. We recommend using `pg_dump` instead. +- The first snapshot is a full backup and consecutive snapshots are differential backups. The differential backups only back up the changed data since the last snapshot backup. For example, if the size of your database is 40GB and your provisioned storage is 64GB, the first snapshot backup will be 40GB. Now, if you change 4GB of data, then the next differential snapshot backup size will only be 4GB. The transaction logs (write ahead logs - WAL) are separate from the full/differential backups, and are archived continuously. + ### Restoring a server -- When using the Point-in-time-Restore feature, the new server is created with the same compute and storage configurations as the server it is based on.+- When using the Point-in-time-Restore feature, the new server is created with the same compute and storage configurations as the server isn't based on. - VNET based database servers are restored into the same VNET when you restore from a backup. 
- The new server created during a restore doesn't have the firewall rules that existed on the original server. Firewall rules need to be created separately for the new server. - Restoring a deleted server isn't supported. - Cross region restore isn't supported.--+- Restore to a different subscription is not supported but as a workaround, you can restore the server within the same subscription and then migrate the restored server to a different subscription. + ## Next steps - Understand [whatΓÇÖs available for compute and storage options](concepts-compute-storage.md) - Learn about [Supported PostgreSQL Database Versions](concepts-supported-versions.md) - Review [how to back up and restore a server in Azure Database for PostgreSQL using the Azure portal](how-to-restore-server-portal.md)+ |
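The limits section above recommends alert rules on `storage used` or `storage percent` before the server is switched to read-only mode. One way to create such an alert from the CLI is sketched below; the resource IDs and action group are placeholders, and the metric name `storage_percent` is an assumption to verify against your server's available metrics.

```azurecli
# Alert when storage usage on a flexible server exceeds 80% (placeholder IDs; metric name is an assumption).
az monitor metrics alert create \
    --name "pg-storage-above-80" \
    --resource-group <resource-group-name> \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server-name>" \
    --condition "avg storage_percent > 80" \
    --window-size 5m \
    --evaluation-frequency 5m \
    --action "<action-group-resource-id>" \
    --description "PostgreSQL flexible server storage usage above 80 percent"
```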
sap | Configure Sap Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-sap-parameters.md | This table contains the default parameters defined by the framework. ### User IDs +This table contains the IDs for the SAP users and groups for the different platforms. ++> [!div class="mx-tdCol2BreakAll "] +> | Parameter | Description | Default Value | +> | - | -- | - | +> | HANA | | | +> | `sapadm_uid` | The UID for the sapadm account. | 2100 | +> | `sidadm_uid` | The UID for the sidadm account. | 2003 | +> | `hdbadm_uid` | The UID for the hdbadm account. | 2200 | +> | `sapinst_gid` | The GID for the sapinst group. | 2001 | +> | `sapsys_gid` | The GID for the sapsys group. | 2000 | +> | `hdbshm_gid` | The GID for the hdbshm group. | 2002 | +> | DB2 | | | +> | `db2sidadm_uid` | The UID for the db2sidadm account. | 3004 | +> | `db2sapsid_uid` | The UID for the db2sapsid account. | 3005 | +> | `db2sysadm_gid` | The UID for the db2sysadm group. | 3000 | +> | `db2sysctrl_gid` | The UID for the db2sysctrl group. | 3001 | +> | `db2sysmaint_gid` | The UID for the db2sysmaint group. | 3002 | +> | `db2sysmon_gid` | The UID for the db2sysmon group. | 2003 | +> | ORACLE | | | +> | `orasid_uid` | The UID for the orasid account. | 3100 | +> | `oracle_uid` | The UID for the oracle account. | 3101 | +> | `observer_uid` | The UID for the observer account. | 4000 | +> | `dba_gid` | The GID for the dba group. | 3100 | +> | `oper_gid` | The GID for the oper group. | 3101 | +> | `asmoper_gid` | The GID for the asmoper group. | 3102 | +> | `asmadmin_gid` | The GID for the asmadmin group. | 3103 | +> | `asmdba_gid` | The GID for the asmdba group. | 3104 | +> | `oinstall_gid` | The GID for the oinstall group. | 3105 | +> | `backupdba_gid` | The GID for the backupdba group. | 3106 | +> | `dgdba_gid` | The GID for the dgdba group. | 3107 | +> | `kmdba_gid` | The GID for the kmdba group. | 3108 | +> | `racdba_gid` | The GID for the racdba group. | 3108 | +++### Windows parameters ++This table contains the information pertinent to Windows deployments. + > [!div class="mx-tdCol2BreakAll "]-> | Parameter | Description | Default Value | Type | -> | - | -- | - | - | -> | `sapadm_uid` | The UID for the sapadm account. | 2100 | Required | -> | `sidadm_uid` | The UID for the sidadm account. | 2003 | Required | -> | `hdbadm_uid` | The UID for the hdbadm account. | 2200 | Required | -> | `sapinst_gid` | The GID for the sapinst group. | 2001 | Required | -> | `sapsys_gid` | The GID for the sapsys group. | 2000 | Required | -> | `hdbshm_gid` | The GID for the hdbshm group. | 2002 | Required | -> | | | | | -> | `db2sidadm_uid` | The UID for the db2sidadm account. | 3004 | Required | -> | `db2sapsid_uid` | The UID for the db2sapsid account. | 3005 | Required | -> | `db2sysadm_gid` | The UID for the db2sysadm group. | 3000 | Required | -> | `db2sysctrl_gid` | The UID for the db2sysctrl group. | 3001 | Required | -> | `db2sysmaint_gid` | The UID for the db2sysmaint group. | 3002 | Required | -> | `db2sysmon_gid` | The UID for the db2sysmon group. | 2003 | Required | -> | | | | | -> | `orasid_uid` | The UID for the orasid account. | 3100 | Required | -> | `oracle_uid` | The UID for the oracle account. | 3101 | Required | -> | `observer_uid` | The UID for the observer account. | 4000 | Required | -> | `dba_gid` | The GID for the dba group. | 3100 | Required | -> | `oper_gid` | The GID for the oper group. | 3101 | Required | -> | `asmoper_gid` | The GID for the asmoper group. 
| 3102 | Required | -> | `asmadmin_gid` | The GID for the asmadmin group. | 3103 | Required | -> | `asmdba_gid` | The GID for the asmdba group. | 3104 | Required | -> | `oinstall_gid` | The GID for the oinstall group. | 3105 | Required | -> | `backupdba_gid` | The GID for the backupdba group. | 3106 | Required | -> | `dgdba_gid` | The GID for the dgdba group. | 3107 | Required | -> | `kmdba_gid` | The GID for the kmdba group. | 3108 | Required | -> | `racdba_gid` | The GID for the racdba group. | 3108 | Required | +> | Parameter | Description | Default Value | +> | - | -- | - | +> | `mssserver_version` | SQL Server version | `mssserver2019` | ## Parameters This table contains the parameters stored in the sap-parameters.yaml file, most > | `ers_instance_number` | Defines the instance number for ERS | Required | > | `ers_lb_ip` | IP address of ERS instance | Required | > | `pas_instance_number` | Defines the instance number for PAS | Required |+> | `web_sid` | The SID for the Web Dispatcher | Required if web dispatchers are deployed | +> | `scs_clst_lb_ip` | IP address of Windows Cluster service | Required | ### Database Tier This table contains the parameters stored in the sap-parameters.yaml file, most > | Parameter | Description | Type | > | - | - | - | > | `db_sid` | The SID of the SAP database | Required |+> | `db_instance_number` | Defines the instance number for the database | Required | > | `db_high_availability` | Defines if the database is deployed highly available | Required | > | `db_lb_ip` | IP address of the database load balancer | Required | > | `platform` | The database platform. Valid values are: ASE, DB2, HANA, ORACLE, SQLSERVER | Required |+> | `db_clst_lb_ip` | IP address of database cluster for Windows | Required | ### NFS This table contains the parameters stored in the sap-parameters.yaml file, most > | `sap_trans` | The NFS path for sap_trans | Required | > | `usr_sap_install_mountpoint` | The NFS path for usr/sap/install | Required | +### Azure NetApp Files +> [!div class="mx-tdCol2BreakAll "] +> | Parameter | Description | Type | +> | - | - | - | +> | `hana_data` | The NFS path for hana_data volumes | Required | +> | `hana_log` | The NFS path for hana_log volumes | Required | +> | `hana_shared` | The NFS path for hana_shared volumes | Required | +> | `usr_sap` | The NFS path for /usr/sap volumes | Required | ++### Windows support ++> [!div class="mx-tdCol2BreakAll "] +> | Parameter | Description | Type | +> | - | - | - | +> | `domain_name` | Defines the Windows domain name, for example sap.contoso.net. | Required | +> | `domain` | Defines the Windows domain Netbios name, for example sap. 
| Optional | +> | SQL | | | +> | `use_sql_for_SAP` | Uses the SAP defined SQL Server media, defaults to 'true' | Optional | +> | `win_cluster_share_type` | Defines the cluster type (CSD/FS), defaults to CSD | Optional | + ### Miscellaneous > [!div class="mx-tdCol2BreakAll "] This table contains the parameters stored in the sap-parameters.yaml file, most > | `kv_name` | The name of the Azure key vault containing the system credentials | Required | > | `secret_prefix` | The prefix for the name of the secrets for the SID stored in key vault | Required | > | `upgrade_packages` | Update all installed packages on the virtual machines | Required |+> | `use_msi_for_clusters` | Use managed identities for fencing | Required | ### Disks In order to install the Oracle backend using the SAP on Azure Deployment Automat > | `ora_release` | The Oracle release version, for example 19.0.0 | Required | > | `oracle_sbp_patch` | The Oracle SBP patch file name | Required | -### Shared Home support +#### Shared Home support To configure shared home support for Oracle, you need to add a dictionary defining the SIDs to be deployed. You can do that by adding the parameter 'MULTI_SIDS' that contains a list of the SIDs and the SID details. Each row must specify the following parameters. > | `app_inst_no` | The APP instance number for the instance | Required | +## Overriding the default parameters ++You can override the default parameters by either specifying them in the sap-parameters.yaml file or by passing them as command line parameters to the Ansible playbooks. ++For example if you want to override the default value of the group ID for the sapinst group (`sapinst_gid`) parameter, you can do it by adding the following line to the sap-parameters.yaml file: ++```yaml +sapinst_gid: 1000 +``` ++If you want to provide them as parameters for the Ansible playbooks, you can do it by adding the following parameter to the command line: ++```bash +ansible-playbook -i hosts SID_hosts.yaml --extra-vars "sapinst_gid=1000" ..... +``` ++You can also override the default parameters by specifying them in the `configuration_settings' variable in your tfvars file. For example, if you want to override 'sapinst_gid' your tfvars file should contain the following line: ++```terraform +configuration_settings = { + sapinst_gid = "1000" +} +``` +++ ## Next steps > [!div class="nextstepaction"] |
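The override section above shows single-value overrides passed inline with `--extra-vars`. When several IDs need to change at once, Ansible also accepts a file-based extra-vars payload, as in this hedged sketch; the file name and values are illustrative only.

```bash
# overrides.yaml (illustrative values only).
cat <<'EOF' > overrides.yaml
sapinst_gid: 1000
sapsys_gid: 1001
sidadm_uid: 2050
EOF

# The "@" prefix tells Ansible to read the extra vars from the file rather than the command line.
ansible-playbook -i hosts SID_hosts.yaml --extra-vars "@overrides.yaml"
```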
service-bus-messaging | Jms Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/jms-developer-guide.md | Last updated 02/12/2022 This guide contains detailed information to help you succeed in communicating with Azure Service Bus using the Java Message Service (JMS) 2.0 API. -As a Java developer, if you are new to Azure Service Bus, please consider reading the below articles. +As a Java developer, if you're new to Azure Service Bus, please consider reading the below articles. | Getting started | Concepts | |-|-| The connection factory object is used by the client to connect with the JMS prov Each connection factory is an instance of `ConnectionFactory`, `QueueConnectionFactory` or `TopicConnectionFactory` interface. -To simplify connecting with Azure Service Bus, these interfaces are implemented through `ServiceBusJmsConnectionFactory`, `ServiceBusJmsQueueConnectionFactory` and `ServiceBusJmsTopicConnectionFactory` respectively. The Connection factory can be instantiated with the below parameters - +To simplify connecting with Azure Service Bus, these interfaces are implemented through `ServiceBusJmsConnectionFactory`, `ServiceBusJmsQueueConnectionFactory` and `ServiceBusJmsTopicConnectionFactory` respectively. ++> [!IMPORTANT] +> Java applications leveraging JMS 2.0 API can connect to Azure Service Bus using the connection string, or using a `TokenCredential` for leveraging Azure Active Directory (AAD) backed authentication. ++# [System Assigned Managed Identity](#tab/system-assigned-managed-identity-backed-authentication) ++Create a [system assigned managed identity](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) on Azure, and use this identity to create a `TokenCredential`. ++```java +TokenCredential tokenCredential = new DefaultAzureCredentialBuilder().build(); +``` ++The Connection factory can then be instantiated with the below parameters.- + * Token credential - Represents a credential capable of providing an OAuth token. + * Connection string - the connection string for the Azure Service Bus Premium tier namespace. + * ServiceBusJmsConnectionFactorySettings property bag, which contains + * connectionIdleTimeoutMS - idle connection timeout in milliseconds. + * traceFrames - boolean flag to collect AMQP trace frames for debugging. + * *other configuration parameters* ++The factory can be created as shown here. The connection string is a required parameter, but the other properties are optional. ++```java +String host = "<YourNamespaceName>.servicebus.windows.net"; +ConnectionFactory factory = new ServiceBusJmsConnectionFactory(tokenCredential, host, null); +``` ++# [User Assigned Managed Identity](#tab/user-assigned-managed-identity-backed-authentication) ++Create a [user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) on Azure, and use this identity to create a `TokenCredential`. ++```java +TokenCredential tokenCredential = new DefaultAzureCredentialBuilder() + .managedIdentityClientId("<clientIDOfUserAssignedIdentity>") + .build(); +``` ++The Connection factory can then be instantiated with the below parameters.- + * Token credential - Represents a credential capable of providing an OAuth token. + * Connection string - the connection string for the Azure Service Bus Premium tier namespace. 
+ * ServiceBusJmsConnectionFactorySettings property bag, which contains + * connectionIdleTimeoutMS - idle connection timeout in milliseconds. + * traceFrames - boolean flag to collect AMQP trace frames for debugging. + * *other configuration parameters* ++The factory can be created as shown here. The connection string is a required parameter, but the other properties are optional. ++```java +String host = "<YourNamespaceName>.servicebus.windows.net"; +ConnectionFactory factory = new ServiceBusJmsConnectionFactory(tokenCredential, host, null); +``` ++# [Connection string authentication](#tab/connection-string-authentication) ++The Connection factory can be instantiated with the below parameters - * Connection string - the connection string for the Azure Service Bus Premium tier namespace.- * ServiceBusJmsConnectionFactorySettings property bag which contains + * ServiceBusJmsConnectionFactorySettings property bag, which contains * connectionIdleTimeoutMS - idle connection timeout in milliseconds. * traceFrames - boolean flag to collect AMQP trace frames for debugging. * *other configuration parameters* -The factory can be created as below. The connection string is a required parameter, but the additional properties are optional. +The factory can be created as shown here. The connection string is a required parameter, but the other properties are optional. ```java ConnectionFactory factory = new ServiceBusJmsConnectionFactory(SERVICE_BUS_CONNECTION_STRING, null); ``` -> [!IMPORTANT] -> Java applications leveraging JMS 2.0 API must connect to Azure Service Bus using the connection string only. Currently, authentication for JMS clients is only supported using the Connection string. -> -> Azure active directory (AAD) backed authentication is not currently supported. -> + ### JMS destination Destinations map to entities in Azure Service Bus - queues (in point to point sc ### Connections -A connection encapsulates a virtual connection with a JMS provider. With Azure Service Bus,this represents a stateful connection between the application and Azure Service Bus over AMQP. +A connection encapsulates a virtual connection with a JMS provider. With Azure Service Bus, this represents a stateful connection between the application and Azure Service Bus over AMQP. A connection is created from the connection factory as shown below. A session can be created with any of the below modes. |**Session.DUPS_OK_ACKNOWLEDGE**|This acknowledgment mode instructs the session to lazily acknowledge the delivery of messages.| |**Session.SESSION_TRANSACTED**|This value may be passed as the argument to the method createSession(int sessionMode) on the Connection object to specify that the session should use a local transaction.| -When the session mode is not specified, the **Session.AUTO_ACKNOWLEDGE** is picked by default. +When the session mode isn't specified, the **Session.AUTO_ACKNOWLEDGE** is picked by default. ### JMSContext Just like the **Session** object, the JMSContext can be created with the same ac JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE); ``` -When the mode is not specified, the **JMSContext.AUTO_ACKNOWLEDGE** is picked by default. +When the mode isn't specified, the **JMSContext.AUTO_ACKNOWLEDGE** is picked by default. ### JMS message producers A message producer is an object that is created using a JMSContext or a Session and used for sending messages to a destination. 
-It can be created either as a stand alone object as below - +It can be created either as a stand-alone object as below - ```java JMSProducer producer = context.createProducer(); |
service-bus-messaging | Migrate Jms Activemq To Servicebus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/migrate-jms-activemq-to-servicebus.md | As part of migrating and modifying your client applications to interact with Azu #### Authentication and authorization -Azure role-based access control (Azure RBAC), backed by Azure Active Directory, is the preferred authentication mechanism for Service Bus. Because Azure RBAC, or claim-based authentication, isn't currently supported by Apache QPID JMS, however, you should use SAS keys for authentication. +Azure role-based access control (Azure RBAC), backed by Azure Active Directory, is the preferred authentication mechanism for Service Bus. To enable role-based access control, please follow the steps in the [Azure Service Bus JMS 2.0 developer guide](jms-developer-guide.md). ## Pre-migration |
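The two Service Bus JMS entries above rely on a `TokenCredential` backed by a managed identity for Azure RBAC authentication. For that to work, the identity also needs a Service Bus data-plane role on the Premium namespace; a hedged sketch of that assignment follows, with placeholder IDs and names.

```azurecli
# Grant the managed identity data-plane access to the Service Bus Premium namespace (placeholder values).
az role assignment create \
    --assignee "<managed-identity-object-id>" \
    --role "Azure Service Bus Data Owner" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ServiceBus/namespaces/<namespace-name>"
```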
spring-apps | How To Enterprise Large Cpu Memory Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-large-cpu-memory-applications.md | + + Title: How to deploy large CPU and memory applications in Azure Spring Apps in the Enterprise tier +description: Learn how to deploy large CPU and memory applications in the Enterprise tier for Azure Spring Apps. ++++ Last updated : 03/17/2023++++# Deploy large CPU and memory applications in Azure Spring Apps in the Enterprise tier ++**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier ++This article shows how to deploy large CPU and memory applications in Azure Spring Apps to support CPU intensive or memory intensive workloads. Support for large applications is currently available only in the Enterprise tier, which supports the CPU and memory combinations as shown in the following table. ++| CPU (cores) | Memory (GB) | +| -- | -- | +| 4 | 16 | +| 6 | 24 | +| 8 | 32 | ++## Prerequisites ++- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. +- An Azure Spring Apps service instance. For more information, see [Quickstart: Provision an Azure Spring Apps service instance](/azure/spring-apps/quickstart-provision-service-instance). +- The [Azure CLI](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`. ++## Create a large CPU and memory application ++You can use the Azure portal or the Azure CLI to create applications. ++### [Azure portal](#tab/azure-portal) ++Use the following steps to create a large CPU and memory application using the Azure portal. ++1. Go to your Azure Spring Apps service instance. ++1. In the navigation pane, select **Apps**, and then select **Create app**. ++1. On the **Create App** page, provide a name for **App name** and select the desired **vCpu** and **Memory** values for your application. ++1. Select **Create**. ++ :::image type="content" source="media/how-to-enterprise-large-cpu-memory-applications/create-large-application.png" lightbox="media/how-to-enterprise-large-cpu-memory-applications/create-large-application.png" alt-text="Screenshot of the Azure portal Create App page in Azure Spring Apps showing configuration settings for a new app."::: ++### [Azure CLI](#tab/azure-cli) ++The following command creates an application with the CPU set to eight core processors and memory set to 32 gigabytes. ++```azurecli +az spring app create \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> \ + --name <Spring-app-name> \ + --cpu 8 \ + --memory 32Gi +``` ++++## Scale up and down for large CPU and memory applications ++To adjust your application's CPU and memory settings, you can use the Azure portal or Azure CLI commands. ++### [Azure portal](#tab/azure-portal) ++Use the following steps to scale up or down a large CPU and memory application. ++1. On the overview page of your app, select **Scale up** in the navigation pane. ++1. Select the preferred **vCpu** and **Memory** values. ++ :::image type="content" source="media/how-to-enterprise-large-cpu-memory-applications/scale-large-application.png" lightbox="media/how-to-enterprise-large-cpu-memory-applications/scale-large-application.png" alt-text="Screenshot of Azure portal Configuration page showing how to scale large app."::: ++1. Select **Save**. 
++### [Azure CLI](#tab/azure-cli) ++The following command scales up an app to have high CPU and memory values. ++```azurecli +az spring app scale \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> \ + --name <Spring-app-name> \ + --cpu 8 \ + --memory 32Gi +``` ++The following command scales down an app to have low CPU and memory values. ++```azurecli +az spring app scale \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-service-instance-name> \ + --name <Spring-app-name> \ + --cpu 1 \ + --memory 2Gi +``` ++++## Next steps ++- [Build and deploy apps to Azure Spring Apps](/azure/spring-apps/quickstart-deploy-apps) +- [Scale an application in Azure Spring Apps](/azure/spring-apps/how-to-scale-manual) |
spring-apps | How To Scale Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-scale-manual.md | -This article demonstrates how to scale any Spring application using the Azure Spring Apps dashboard in the Azure portal. +This article demonstrates how to scale a Spring application using Azure Spring Apps in the Azure portal. -Scale your application up and down by modifying its number of virtual CPUs (vCPUs) and amount of memory. Scale your application in and out by modifying the number of application instances. +You can scale your app up and down by modifying its number of virtual CPUs (vCPUs) and amount of memory. Scale your app in and out by modifying the number of application instances. -After you finish, you'll know how to make quick manual changes to each application in your service. Scaling takes effect in seconds and doesn't require any code changes or redeployment. +After you finish, you'll know how to make quick manual changes to each application in your service. Scaling takes effect within seconds and doesn't require any code changes or redeployment. ## Prerequisites -To follow these procedures, you need: - * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-* A deployed Azure Spring Apps service instance. Follow the [quickstart on deploying an app via the Azure CLI](./quickstart.md) to get started. +* A deployed Azure Spring Apps service instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md) to get started. * At least one application already created in your service instance. ## Navigate to the Scale page in the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Go to your Azure Spring Apps **Overview** page. --1. Select the resource group that contains your service. +1. Go to your Azure Spring Apps instance. -1. Select the **Apps** tab under **Settings** in the menu on the left side of the page. +1. Select **Apps** under **Settings** in the navigation pane. -1. Select the application you want to scale. In this example, select the application named **account-service**. You should then see the application's **Overview** page. +1. Select the app you want to scale and then select **Scale up** in the navigation pane. Specify the **vCPU** and **Memory** settings using the guidelines as described in the following section. -1. Go to the **Scale** tab under **Settings** in the menu on the left side of the page. You should see options for the scaling the attributes shown in the following section. +1. Select **Scale out** in the navigation pane. Specify the **instance count** setting as described in the following section. ## Scale your application -If you modify the scaling attributes, keep the following notes in mind: +As you modify the scaling attributes, keep the following notes in mind: -* **CPUs**: The maximum number of CPUs per application instance is four. The total number of CPUs for an application is the value set here multiplied by the number of application instances. +* **vCPU**: The maximum number of CPUs per application instance is four. The total number of CPUs for an application is the value set here multiplied by the number of application instances. -* **Memory/GB**: The maximum amount of memory per application instance is 8 GB. 
The total amount of memory for an application is the value set here multiplied by the number of application instances. +* **Memory**: The maximum amount of memory per application instance is 8 GB. The total amount of memory for an application is the value set here multiplied by the number of application instances. -* **App instance count**: In the Standard tier, you can scale out to a maximum of 20 instances. This value changes the number of separate running instances of the Spring application. +* **instance count**: In the Standard tier, you can scale out to a maximum of 20 instances. This value changes the number of separate running instances of the Spring application. Be sure to select **Save** to apply your scaling settings. - -After a few seconds, the changes you made are displayed on the **Overview** page, with more details available in the **Application instances** tab. Scaling doesn't require any code changes or redeployment. +After a few seconds, the scaling changes you make are reflected on the **Overview** page of the app. Select **App instance** in the navigation pane for details about the instance of the app. ## Upgrade to the Standard tier -If you are on the Basic tier and constrained by one or more of these [limits](./quotas.md), you can upgrade to the Standard tier. To do this go to the Pricing tier menu by first selecting the **Standard tier** column and then selecting the **Upgrade** button. +If you're on the Basic tier and constrained by current limits, you can upgrade to the Standard tier. For more information, see [Quotas and service plans for Azure Spring Apps](./quotas.md) and [Migrate an Azure Spring Apps Basic or Standard tier instance to Enterprise tier](/azure/spring-apps/how-to-migrate-standard-tier-to-enterprise-tier). ## Next steps |
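The portal steps above have a CLI equivalent that applies the same vCPU, memory, and instance-count limits. The sketch below uses placeholder names and stays within the Standard-tier maximums described in the article.

```azurecli
# Scale up an app to 4 vCPUs / 8 GB and scale out to 5 instances (placeholder names).
az spring app scale \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-service-instance-name> \
    --name <app-name> \
    --cpu 4 \
    --memory 8Gi \
    --instance-count 5
```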
spring-apps | Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quotas.md | -All Azure services set default limits and quotas for resources and features. Azure Spring Apps offers two pricing tiers: Basic and Standard. We will detail limits for both tiers in this article. +All Azure services set default limits and quotas for resources and features. Azure Spring Apps offers three pricing tiers: Basic, Standard, and Enterprise. ## Azure Spring Apps service tiers and limits -| Resource | Scope | Basic | Standard/Enterprise | -|--|--|--|-| -| vCPU | per app instance | 1 | 4 | -| Memory | per app instance | 2 GB | 8 GB | -| Azure Spring Apps service instances | per region per subscription | 10 | 10 | -| Total app instances | per Azure Spring Apps service instance | 25 | 500 | -| Custom Domains | per Azure Spring Apps service instance | 0 | 25 | -| Persistent volumes | per Azure Spring Apps service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps | -| Inbound Public Endpoints | per Azure Spring Apps service instance | 10 <sup>1</sup> | 10 <sup>1</sup> | -| Outbound Public IPs | per Azure Spring Apps service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | -| User-assigned managed identities | per app instance | 20 | 20 | +The following table defines limits for the pricing tiers in Azure Spring Apps. ++| Resource | Scope | Basic | Standard | Enterprise | +|--|--||--|--| +| vCPU | per app instance | 1 | 4 | 8 | +| Memory | per app instance | 2 GB | 8 GB | 32 GB | +| Azure Spring Apps service instances | per region per subscription | 10 | 10 | 10 | +| Total app instances | per Azure Spring Apps service instance | 25 | 500 | 500 | +| Custom Domains | per Azure Spring Apps service instance | 0 | 25 | 25 | +| Persistent volumes | per Azure Spring Apps service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps | 50 GB/app x 10 apps | +| Inbound Public Endpoints | per Azure Spring Apps service instance | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> | +| Outbound Public IPs | per Azure Spring Apps service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | +| User-assigned managed identities | per app instance | 20 | 20 | 20 | <sup>1</sup> You can increase this limit via support request to a maximum of 1 per app. <sup>2</sup> You can increase this limit via support request to a maximum of 10. > [!TIP]-> Limits listed for Total app instances per service instance apply for apps and deployments in any state, including stopped state. Be sure to delete apps or deployments that aren't in use. +> Limits listed for total app instances, per service instance, apply for apps and deployments in any state, including apps in a stopped state. Be sure to delete apps or deployments that are not being used. ## Next steps -Some default limits can be increased. If your setup requires an increase, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). +Some default limits can be increased. For more information, see [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). |
synapse-analytics | Synapse File Mount Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-file-mount-api.md | You can create a linked service for Data Lake Storage Gen2 or Blob Storage. Curr  +> [!IMPORTANT] +> +> - If the above created Linked Service to Azure Data Lake Storage Gen2 uses a [managed private endpoint](../security/synapse-workspace-managed-private-endpoints.md) (with a *dfs* URI) , then we need to create another secondary managed private endpoint using the Azure Blob Storage option (with a **blob** URI) to ensure that the internal [fsspec/adlfs](https://github.com/fsspec/adlfs/blob/main/adlfs/spec.py#L400) code can connect using the *BlobServiceClient* interface. +> - In case the secondary managed private endpoint is not configured correctly, then we would see an error message like *ServiceRequestError: Cannot connect to host [storageaccountname].blob.core.windows.net:443 ssl:True [Name or service not known]* +> +>  + > [!NOTE] > If you create a linked service by using a managed identity as the authentication method, make sure that the workspace MSI file has the Storage Blob Data Contributor role of the mounted container. |
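The important note above calls for a second managed private endpoint that targets the storage account's **blob** sub-resource alongside the **dfs** one. A hedged sketch using the `synapse` CLI extension follows; the JSON definition format (`privateLinkResourceId` plus `groupId`), the endpoint names, and the resource IDs are assumptions to validate against your workspace.

```azurecli
# Managed private endpoint definitions (assumed file format for the synapse CLI extension).
cat <<'EOF' > pe-dfs.json
{
  "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storageaccountname>",
  "groupId": "dfs"
}
EOF
cat <<'EOF' > pe-blob.json
{
  "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storageaccountname>",
  "groupId": "blob"
}
EOF

# Create the dfs endpoint and the secondary blob endpoint (placeholder names).
az synapse managed-private-endpoints create --workspace-name <workspace-name> --pe-name storage-dfs --file @pe-dfs.json
az synapse managed-private-endpoints create --workspace-name <workspace-name> --pe-name storage-blob --file @pe-blob.json
```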
synapse-analytics | Tutorial Use Pandas Spark Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md | If you don't have an Azure subscription, [create a free account before you begin :::image type="content" source="media/tutorial-use-pandas-spark-pool/create-adls-linked-service.png" alt-text="Screenshot of creating a linked service using an ADLS Gen2 storage access key."::: +> [!IMPORTANT] +> +> - If the above created Linked Service to Azure Data Lake Storage Gen2 uses a [managed private endpoint](../security/synapse-workspace-managed-private-endpoints.md) (with a *dfs* URI) , then we need to create another secondary managed private endpoint using the Azure Blob Storage option (with a **blob** URI) to ensure that the internal [fsspec/adlfs](https://github.com/fsspec/adlfs/blob/main/adlfs/spec.py#L400) code can connect using the *BlobServiceClient* interface. +> - In case the secondary managed private endpoint is not configured correctly, then we would see an error message like *ServiceRequestError: Cannot connect to host [storageaccountname].blob.core.windows.net:443 ssl:True [Name or service not known]* +> +>  > [!NOTE] > - Pandas feature is supported on **Python 3.8** and **Spark3** serverless Apache Spark pool in Azure Synapse Analytics. |
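After the secondary blob managed private endpoint described above is created, it still has to be approved on the storage account before fsspec/adlfs can connect. A hedged sketch of listing and approving the pending connection with generic networking commands follows; all IDs are placeholders, and the connection names vary per workspace.

```azurecli
# List pending private endpoint connections on the storage account (placeholder IDs).
az network private-endpoint-connection list \
    --id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storageaccountname>"

# Approve the connection created for the Synapse managed private endpoint.
az network private-endpoint-connection approve \
    --id "<private-endpoint-connection-resource-id>" \
    --description "Approved for Synapse managed private endpoint"
```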
virtual-desktop | Connection Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md | In contrast to other diagnostics tables that report data at regular intervals th - Learn more about how to monitor and run queries about connection quality issues at [Monitor connection quality](connection-quality-monitoring.md). - Troubleshoot connection and latency issues at [Troubleshoot connection quality for Azure Virtual Desktop](troubleshoot-connection-quality.md). - To check the best location for optimal latency, see the [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/).-- For pricing plans, see [Azure Log Analytics pricing](/services-hub/health/azure_pricing).+- For pricing plans, see [Azure Log Analytics pricing](/services-hub/premier/health/azure_pricing). - To get started with your Azure Virtual Desktop deployment, check out [our tutorial](./create-host-pools-azure-marketplace.md). - To learn about bandwidth requirements for Azure Virtual Desktop, see [Understanding Remote Desktop Protocol (RDP) Bandwidth Requirements for Azure Virtual Desktop](rdp-bandwidth.md). - To learn about Azure Virtual Desktop network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md). |
virtual-machines | Dedicated Host Retirement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-retirement.md | Last updated 3/15/2021 We continue to modernize and optimize Azure Dedicated Host by using the latest innovations in processor and datacenter technologies. Azure Dedicated Host is a combination of a virtual machine (VM) series and a specific Intel or AMD-based physical server. As we innovate and work with our technology partners, we also need to plan how we retire aging technology. -## Migrations required by 31 March 2023 +## Update: Retirement timeline extension +Based on feedback from Azure Dedicated Host customers running critical workloads on SKUs scheduled for retirement, we have extended the retirement date from March 31, 2023 to June 30, 2023. +We don't intend to extend the timeline further, and we recommend that all Azure Dedicated Host customers using any of the listed SKUs migrate to newer-generation SKUs to avoid workload disruption. -All hardware has a finite lifespan, including the underlying hardware for Azure Dedicated Host. As we continue to modernize Azure datacenters, hardware is decommissioned and eventually retired. The hardware that runs the following Dedicated Host SKUs will be reaching end of life: +## Migrations required by 30 June 2023 [Updated] ++All hardware has a finite lifespan, including the underlying hardware for Azure Dedicated Host. As we continue to modernize Azure datacenters, hardware is decommissioned and eventually retired. The hardware that runs the following Dedicated Host SKUs is reaching end of life: - Dsv3-Type1 - Dsv3-Type2 - Esv3-Type1 - Esv3-Type2 -As a result we'll retire these Dedicated Host SKUs on 31 March 2023. +As a result, we'll retire these Dedicated Host SKUs on 30 June 2023. ## How does the retirement of Azure Dedicated Host SKUs affect you? The current retirement impacts the following Azure Dedicated Host SKUs: - Dsv3-Type2 - Esv3-Type2 -Note: If you're running a Dsv3-Type3, Dsv3-Type4, an Esv3-Type3, or an Esv3-Type4 Dedicated Host, you won't be impacted. +Note: If you're running a Dsv3-Type3, a Dsv3-Type4, an Esv3-Type3, or an Esv3-Type4 Dedicated Host, you aren't impacted. ## What actions should you take? -For manually placed VMs, you'll need to create a Dedicated Host of a newer SKU, stop the VMs on your existing Dedicated Host, reassign them to the new host, start the VMs, and delete the old host. For automatically placed VMs or for virtual machine scale sets, you'll need to create a Dedicated Host of a newer SKU, stop the VMs or virtual machine scale set, delete the old host, and then start the VMs or virtual machine scale set. +For manually placed VMs, you need to create a Dedicated Host of a newer SKU, stop the VMs on your existing Dedicated Host, reassign them to the new host, start the VMs, and delete the old host. For automatically placed VMs or for Virtual Machine Scale Sets, you need to create a Dedicated Host of a newer SKU, stop the VMs or Virtual Machine Scale Set, delete the old host, and then start the VMs or Virtual Machine Scale Set. (An illustrative Azure SDK for Python sketch of these steps follows this entry.) Refer to the [Azure Dedicated Host Migration Guide](dedicated-host-migration-guide.md) for more detailed instructions. We recommend moving to the latest generation of Dedicated Host for your VM family. If you have any questions, contact us through customer support. ### Q: Will migration result in downtime? 
-A: Yes, you'll need to stop/deallocate your VMs or virtual machine scale sets before moving them to the target host. +A: Yes, you need to stop/deallocate your VMs or Virtual Machine Scale Sets before moving them to the target host. ### Q: When will the other Dedicated Host SKUs retire? A: | Date | Action | | - | --| | 15 March 2022 | Dsv3-Type1, Dsv3-Type2, Esv3-Type1, Esv3-Type2 retirement announcement |-| 31 March 2023 | Dsv3-Type1, Dsv3-Type2, Esv3-Type1, Esv3-Type2 retirement | +| 30 June 2023 | Dsv3-Type1, Dsv3-Type2, Esv3-Type1, Esv3-Type2 retirement | -### Q: What will happen to my Azure Reservation? +### Q: What happens to my Azure Reservation? -A: You'll need to [exchange your reservation](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md#how-to-exchange-or-refund-an-existing-reservation) through the Azure portal to match the new Dedicated Host SKU. +A: You need to [exchange your reservation](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md#how-to-exchange-or-refund-an-existing-reservation) through the Azure portal to match the new Dedicated Host SKU. -### Q: What would happen to my host if I do not migrate by March 31, 2023? +### Q: What would happen to my host if I do not migrate by June 30, 2023? -A: After March 31, 2023 any dedicated host running on the SKUs that are marked for retirement will be set to 'Host Pending Deallocate' state before eventually deallocating the host. For additional assistance please reach out to Azure support. +A: After June 30, 2023, any dedicated host running on a SKU that's marked for retirement will be set to the 'Host Pending Deallocate' state and then eventually deallocated. For more assistance, reach out to Azure support. ### Q: What will happen to my VMs if a Host is automatically deallocated? |
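For the manually placed VM flow called out in the Dedicated Host retirement entry above, the following is an illustrative sketch using the Azure SDK for Python (`azure-mgmt-compute`); every resource name, the host group, the region, and the `Dsv3-Type3` target SKU are placeholders, so treat this as an outline of the steps rather than a prescribed procedure (the linked migration guide remains the authoritative reference).

```python
# Illustrative sketch of the manual-placement migration: create a newer-SKU host,
# deallocate the VM, reassign it to the new host, start it, then delete the old host.
# <subscription-id>, my-rg, my-host-group, my-vm, old-host, and the SKU name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DedicatedHost, Sku, SubResource, VirtualMachineUpdate

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, host_group = "my-rg", "my-host-group"

# 1. Create a Dedicated Host of a newer SKU in the same host group.
new_host = compute.dedicated_hosts.begin_create_or_update(
    rg, host_group, "new-host",
    DedicatedHost(location="eastus", sku=Sku(name="Dsv3-Type3"), platform_fault_domain=0),
).result()

# 2. Stop (deallocate) the VM, reassign it to the new host, then start it again.
compute.virtual_machines.begin_deallocate(rg, "my-vm").wait()
compute.virtual_machines.begin_update(
    rg, "my-vm", VirtualMachineUpdate(host=SubResource(id=new_host.id))
).wait()
compute.virtual_machines.begin_start(rg, "my-vm").wait()

# 3. Delete the old host once every VM has been moved off it.
compute.dedicated_hosts.begin_delete(rg, host_group, "old-host").wait()
```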
virtual-network-manager | Concept Security Admins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md | Security admin rules are similar to NSG rules in structure and the parameters th | **Security Admin Rules** | Network admins, central governance team | Virtual networks | Higher priority | Allow, Deny, Always Allow | Priority, protocol, action, source, destination | | **NSG Rules** | Individual teams | Subnets, NICs | Lower priority, after security admin rules | Allow, Deny | Priority, protocol, action, source, destination | -## Network intent policies and security admin rules --A network intent policy is applied to some network services to ensure the network traffic is working as needed for these services. By default, a security admin configuration will not apply security admin rules to virtual networks with services that use network intent policies such as SQL managed instance service. With this default option, if you deploy a service using network intent policies in a virtual network with existing security admin rules applied, those security admin rules will be removed from those virtual networks. You can also elect for your security admin configuration to handle virtual networks with services that use network intent policies differently to instead apply security admin rules to those virtual networks unless the security admin rule is of a "deny" action type. With either option, your security admin rules will not block traffic to or from virtual networks with services that use network intent policies, ensuring that these services continue to function as expected. --If you need to apply security admin rules on virtual networks with services that use network intent policies, contact AVNMFeatureRegister@microsoft.com to enable this functionality. Overriding the default behavior described above could break the network intent policies created for those services. For example, creating a deny admin rule can block some traffic allowed by the SQL managed instance service, which is defined by their network intent policies. Make sure to review your environment before applying a security admin configuration. For an example of how to allow the traffic of services that use network intent policies, see [How can I explicitly allow SQLMI traffic before having deny rules](faq.md#how-can-i-explicitly-allow-azure-sql-managed-instance-traffic-before-having-deny-rules) ## Security admin fields When you define a security admin rule, there are required and optional fields. |
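To make the field comparison above concrete, here is an illustrative representation of a single security admin rule; the key names are chosen for readability and are not claimed to match the exact Azure Virtual Network Manager API schema.

```python
# Illustrative only: a security admin rule's fields expressed as a plain structure.
# A lower priority number is evaluated first; security admin rules are applied before NSG rules.
deny_inbound_rdp = {
    "name": "DenyInboundRdp",
    "priority": 100,
    "direction": "Inbound",
    "action": "Deny",                      # Allow, Deny, or Always Allow
    "protocol": "Tcp",
    "sources": ["*"],                      # any source address
    "source_port_ranges": ["*"],
    "destinations": ["*"],                 # virtual networks in the configuration's scope
    "destination_port_ranges": ["3389"],   # RDP
}
```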