Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
advisor | Advisor Alerts Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-arm.md | Title: Create Azure Advisor alerts for new recommendations using Resource Manager template -description: Learn how to set up an alert for new recommendations from Azure Advisor using an Azure Resource Manager template (ARM template). + Title: Create Advisor alerts for new recommendations by using Resource Manager template +description: Learn how to set up an alert for new recommendations from Azure Advisor by using an Azure Resource Manager template (ARM template). Last updated 06/29/2020 -# Quickstart: Create Azure Advisor alerts on new recommendations using an ARM template +# Quickstart: Create Advisor alerts on new recommendations by using an ARM template -This article shows you how to set up an alert for new recommendations from Azure Advisor using an Azure Resource Manager template (ARM template). +This article shows you how to set up an alert for new recommendations from Azure Advisor by using an Azure Resource Manager template (ARM template). [!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)] -Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on. +Whenever Advisor detects a new recommendation for one of your resources, an event is stored in an [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Advisor by using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on. You can also determine the types of recommendations by using these properties: You can also determine the types of recommendations by using these properties: - Impact level - Recommendation type -You can also configure the action that will take place when an alert is triggered by: +You can also configure the action that takes place when an alert is triggered by: -- Selecting an existing action group-- Creating a new action group+- Selecting an existing action group. +- Creating a new action group. To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md). > [!NOTE]-> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations are not supported. +> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations aren't supported. ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).+- To run the commands from your local computer, install the Azure CLI or the Azure PowerShell modules. 
For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell). ## Review the template The template defines two resources: ## Deploy the template -Deploy the template using any standard method for [deploying an ARM template](../azure-resource-manager/templates/deploy-portal.md) such as the following examples using CLI and PowerShell. Replace the sample values for **Resource Group**, and **emailAddress** with appropriate values for your environment. The workspace name must be unique among all Azure subscriptions. +Deploy the template by using any standard method for [deploying an ARM template](../azure-resource-manager/templates/deploy-portal.md), such as the following examples that use the CLI and PowerShell. Replace the sample values for `ResourceGroup`, and `emailAddress` with appropriate values for your environment. The workspace name must be unique among all Azure subscriptions. # [CLI](#tab/CLI) New-AzResourceGroupDeployment -Name CreateAdvisorAlert -ResourceGroupName my-res ## Validate the deployment -Verify that the workspace has been created using one of the following commands. Replace the sample values for **Resource Group** with the value you used above. +Verify that the workspace was created by using one of the following commands. Replace the sample values for **Resource Group** with the value that you used in the previous example. # [CLI](#tab/CLI) Get-AzActivityLogAlert -ResourceGroupName my-resource-group -Name AdvisorAlertsT ## Clean up resources -If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the alert rule and the related resources. To delete the resource group by using Azure CLI or Azure PowerShell +If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete the resource group, which deletes the alert rule and the related resources. To delete the resource group by using the CLI or PowerShell: # [CLI](#tab/CLI) Remove-AzResourceGroup -Name my-resource-group -## Next steps +## Related content -- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md) and learn how to receive alerts. - Learn more about [action groups](../azure-monitor/alerts/action-groups.md). |
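The row above summarizes a quickstart whose CLI tab is truncated in this digest. As a hedged sketch only, the deployment and validation steps it describes typically look like the following; the template file name `advisor-alerts.json` and the `emailAddress` parameter name are illustrative assumptions, not values confirmed by the excerpt.

```bash
# Deploy the ARM template (file name and parameter name are assumptions).
az deployment group create \
  --name CreateAdvisorAlert \
  --resource-group my-resource-group \
  --template-file advisor-alerts.json \
  --parameters emailAddress='user@contoso.com'

# Validate: list the activity log alert rules created in the resource group.
az monitor activity-log alert list --resource-group my-resource-group --output table
```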
advisor | Advisor Alerts Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-bicep.md | Title: Create Azure Advisor alerts for new recommendations using Bicep -description: Learn how to set up an alert for new recommendations from Azure Advisor using Bicep. + Title: Create Advisor alerts for new recommendations by using Bicep +description: Learn how to set up an alert for new recommendations from Azure Advisor by using Bicep. Last updated 04/26/2022 -# Quickstart: Create Azure Advisor alerts on new recommendations using Bicep +# Quickstart: Create Advisor alerts on new recommendations by using Bicep -This article shows you how to set up an alert for new recommendations from Azure Advisor using Bicep. +This article shows you how to set up an alert for new recommendations from Azure Advisor by using Bicep. [!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)] -Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally select a resource group to specify the resources that you want to receive alerts on. +Whenever Advisor detects a new recommendation for one of your resources, an event is stored in an [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Advisor by using a recommendation-specific alerts creation experience. You can select a subscription and optionally select a resource group to specify the resources that you want to receive alerts on. You can also determine the types of recommendations by using these properties: You can also determine the types of recommendations by using these properties: - Impact level - Recommendation type -You can also configure the action that will take place when an alert is triggered by: +You can also configure the action that takes place when an alert is triggered by: -- Selecting an existing action group-- Creating a new action group+- Selecting an existing action group. +- Creating a new action group. To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md). > [!NOTE]-> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations are not supported. +> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations aren't supported. ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).+- To run the commands from your local computer, install the Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell). ## Review the Bicep file The Bicep file defines two resources: ## Deploy the Bicep file -1. 
Save the Bicep file as **main.bicep** to your local computer. -1. Deploy the Bicep file using either Azure CLI or Azure PowerShell. +1. Save the Bicep file as `main.bicep` to your local computer. +1. Deploy the Bicep file by using either the Azure CLI or Azure PowerShell. # [CLI](#tab/CLI) The Bicep file defines two resources: > [!NOTE]- > Replace **\<alert-name\>** with the name of the alert. + > Replace \<alert-name\> with the name of the alert. - When the deployment finishes, you should see a message indicating the deployment succeeded. + When the deployment finishes, you should see a message that indicates the deployment succeeded. ## Validate the deployment -Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group. +Use the Azure portal, the Azure CLI, or Azure PowerShell to list the deployed resources in the resource group. # [CLI](#tab/CLI) Get-AzResource -ResourceGroupName exampleRG ## Clean up resources -When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group. +When you no longer need the resources, use the Azure portal, the Azure CLI, or Azure PowerShell to delete the resource group. # [CLI](#tab/CLI) Remove-AzResourceGroup -Name exampleRG -## Next steps +## Related content -- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md) and learn how to receive alerts. - Learn more about [action groups](../azure-monitor/alerts/action-groups.md). |
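To make the Bicep steps in the row above concrete without opening the article, here's a minimal sketch using the Azure CLI. `exampleRG`, `main.bicep`, and the `alertName` parameter come from the excerpt; the location is a placeholder.

```bash
# Create a resource group, then deploy the Bicep file from the quickstart.
az group create --name exampleRG --location eastus
az deployment group create \
  --resource-group exampleRG \
  --template-file main.bicep \
  --parameters alertName=<alert-name>

# Validate the deployment, then clean up when the resources are no longer needed.
az resource list --resource-group exampleRG
az group delete --name exampleRG --yes
```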
advisor | Advisor Alerts Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-portal.md | Title: Create Azure Advisor alerts for new recommendations using Azure portal -description: Create Azure Advisor alerts for new recommendation + Title: Create Advisor alerts for new recommendations using Azure portal +description: Create Azure Advisor alerts for new recommendations by using the Azure portal. Last updated 09/09/2019 -# Create Azure Advisor alerts on new recommendations using the Azure portal +# Create Azure Advisor alerts on new recommendations by using the Azure portal -This article shows you how to set up an alert for new recommendations from Azure Advisor using the Azure portal. +This article shows you how to set up an alert for new recommendations from Azure Advisor by using the Azure portal. -Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on. +Whenever Advisor detects a new recommendation for one of your resources, an event is stored in the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Advisor by using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on. You can also determine the types of recommendations by using these properties: You can also determine the types of recommendations by using these properties: * Impact level * Recommendation type -You can also configure the action that will take place when an alert is triggered by: +You can also configure the action that takes place when an alert is triggered by: -* Selecting an existing action group -* Creating a new action group +* Selecting an existing action group. +* Creating a new action group. To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md). -> [!NOTE] -> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations are not supported. +> [!NOTE] +> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations aren't supported. -## Create alert rule -1. In the **portal**, select **Azure Advisor**. +## Create an alert rule - ![Azure Advisor in portal](./media/advisor-alerts/create1.png) +Follow these steps to create an alert rule. -2. In the **Monitoring** section of the left menu, select **Alerts**. +1. In the [Azure portal](https://portal.azure.com), select **Advisor**. - ![Alerts in Advisor](./media/advisor-alerts/create2.png) + ![Screenshot that shows Advisor in the portal.](./media/advisor-alerts/create1.png) -3. Select **New Advisor Alert**. +1. In the **Monitoring** section on the left menu, select **Alerts**. - ![New Advisor alert](./media/advisor-alerts/create3.png) + ![Screenshot that shows Alerts in Advisor.](./media/advisor-alerts/create2.png) -4. In the **Scope** section, select the subscription and optionally the resource group that you want to be alerted on. +1. Select **New Advisor Alert**. 
- ![Advisor alert scope](./media/advisor-alerts/create4.png) + ![Screenshot that shows New Advisor Alert.](./media/advisor-alerts/create3.png) -5. In the **Condition** section, select the method you want to use for configuring your alert. If you want to alert for all recommendations for a certain category and/or impact level, select **Category and impact level**. If you want to alert for all recommendations of a certain type, select **Recommendation type**. +1. In the **Scope** section, select the subscription and optionally the resource group that you want to be alerted on. - ![Azure Advisor alert condition](./media/advisor-alerts/create5.png) + ![Screenshot that shows Advisor alert scope.](./media/advisor-alerts/create4.png) -6. Depending on the Configure by option that you select, you will be able to specify the criteria. If you want all recommendations, just leave the remaining fields blank. +1. In the condition section, select the method you want to use for configuring your alert. If you want to alert for all recommendations for a certain category or impact level, select **Category and impact level**. If you want to alert for all recommendations of a certain type, select **Recommendation type**. - ![Advisor alert action group](./media/advisor-alerts/create6.png) + ![Screenshot that shows Advisor alert conditions.](./media/advisor-alerts/create5.png) -7. In the **action groups** section, select **Add existing** to use an action group you already created or select **Create new** to set up a new [action group](../azure-monitor/alerts/action-groups.md). +1. Depending on the **Configured by** option that you select, you can specify the criteria. If you want all recommendations, leave the remaining fields blank. - ![Advisor alert add existing](./media/advisor-alerts/create7.png) + ![Screenshot that shows Advisor alert action group.](./media/advisor-alerts/create6.png) -8. In the Alert details section, give your alert a name and short description. If you want your alert to be enabled, leave **Enable rule upon creation** selection set to **Yes**. Then select the resource group to save your alert to. This will not impact the targeting scope of the recommendation. +1. In the action groups section, choose **Select existing** to use an action group that you already created or select **Create new** to set up a new [action group](../azure-monitor/alerts/action-groups.md). - :::image type="content" source="./media/advisor-alerts/create8.png" alt-text="Screenshot of the Alert details section."::: + ![Screenshot that shows Advisor alert Select existing.](./media/advisor-alerts/create7.png) +1. In the alert details section, give your alert a name and short description. If you want your alert to be enabled, leave the **Enable rule upon creation** selection set to **Yes**. Then select the resource group to save your alert to. This setting won't affect the targeting scope of the recommendation. ++ :::image type="content" source="./media/advisor-alerts/create8.png" alt-text="Screenshot that shows the alert details section."::: ## Configure recommendation alerts to use a webhook-This section shows you how to configure Azure Advisor alerts to send recommendation data through webhooks to your existing systems. -You can set up alerts to be notified when you have a new Advisor recommendation on one of your resources. These alerts can notify you through email or text message, but they can also be used to integrate with your existing systems through a webhook. 
+This section shows you how to configure Advisor alerts to send recommendation data through webhooks to your existing systems. ++You can set up alerts to be notified when you have a new Advisor recommendation on one of your resources. These alerts can notify you through email or text message. They can also be used to integrate with your existing systems through a webhook. +### Use the Advisor recommendation alert payload -### Using the Advisor recommendation alert payload -If you want to integrate Advisor alerts into your own systems using a webhook, you will need to parse the JSON payload that is sent from the notification. +If you want to integrate Advisor alerts into your own systems by using a webhook, you need to parse the JSON payload that's sent from the notification. -When you set up your action group for this alert, you select if you would like to use the common alert schema. If you select the common alert schema, your payload will look like: +When you set up your action group for this alert, you select if you want to use the common alert schema. If you select the common alert schema, your payload looks like this example: ```json { When you set up your action group for this alert, you select if you would like t } ``` -If you do not use the common schema, your payload looks like the following: +If you don't use the common schema, your payload looks like the following example: ```json { If you do not use the common schema, your payload looks like the following: } ``` -In either schema, you can identify Advisor recommendation events by looking for **eventSource** is `Recommendation` and **operationName** is `Microsoft.Advisor/recommendations/available/action`. +In either schema, you can identify Advisor recommendation events by looking for `eventSource` is `Recommendation` and `operationName` is `Microsoft.Advisor/recommendations/available/action`. -Some of the other important fields that you may want to use are: +Some of the other important fields that you might want to use are: -* *alertTargetIDs* (in the common schema) or *resourceId* (legacy schema) -* *recommendationType* -* *recommendationName* -* *recommendationCategory* -* *recommendationImpact* -* *recommendationResourceLink* +* `alertTargetIDs` (in the common schema) or `resourceId` (legacy schema) +* `recommendationType` +* `recommendationName` +* `recommendationCategory` +* `recommendationImpact` +* `recommendationResourceLink` +## Manage your alerts -## Manage your alerts +From Advisor, you can edit, delete, or disable and enable your recommendations alerts. -From Azure Advisor, you can edit, delete, or disable and enable your recommendations alerts. +1. In the [Azure portal](https://portal.azure.com), select **Advisor**. -1. In the **portal**, select **Azure Advisor**. + :::image type="content" source="./media/advisor-alerts/create1.png" alt-text="Screenshot that shows the Azure portal menu with Advisor selected."::: - :::image type="content" source="./media/advisor-alerts/create1.png" alt-text="Screenshot of the Azure portal menu showing Azure Advisor selected."::: +1. In the **Monitoring** section on the left menu, select **Alerts**. -2. In the **Monitoring** section of the left menu, select **Alerts**. + :::image type="content" source="./media/advisor-alerts/create2.png" alt-text="Screenshot that shows the Azure portal menu with Alerts selected."::: - :::image type="content" source="./media/advisor-alerts/create2.png" alt-text="Screenshot of the Azure portal menu showing Alerts selected."::: +1. 
To edit an alert, select the alert name to open the alert and edit the fields you want to edit. -3. To edit an alert, click on the Alert name to open the alert and edit the fields you want to edit. +1. To delete, enable, or disable an alert, select the ellipsis at the end of the row. Then select the action you want to take. -4. To delete, enable, or disable an alert, click on the ellipse at the end of the row and then select the action you would like to take. - +## Related content -## Next steps -- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md) and learn how to receive alerts. - Learn more about [action groups](../azure-monitor/alerts/action-groups.md). |
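The row above describes parsing the webhook JSON payload for `eventSource` and `operationName`. As a hedged illustration, a filter like the following works against the legacy (non-common) activity log schema quoted in the article; the exact field paths are assumptions and differ under the common alert schema.

```bash
# Keep only Advisor recommendation events, then print the recommendation type.
# Field paths assume the legacy activity log payload shape.
jq -r 'select(.data.context.activityLog.eventSource == "Recommendation"
          and .data.context.activityLog.operationName == "Microsoft.Advisor/recommendations/available/action")
       | .data.context.activityLog.properties.recommendationType' payload.json
```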
advisor | Azure Advisor Score | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/azure-advisor-score.md | Last updated 07/12/2024 # Use Advisor score +This article shows you how to use Azure Advisor score to measure optimization progress. + ## Introduction to score -Azure Advisor provides best practice recommendations for your workloads. These recommendations are personalized and actionable to help you: +Advisor provides best-practice recommendations for your workloads. These recommendations are personalized and actionable to help you: * Improve the posture of your workloads and optimize your Azure deployments. * Proactively prevent top issues by following best practices.-* Assess your Azure workloads against the five pillars of the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). +* Assess your Azure workloads against the five pillars of the [Azure Well-Architected Framework](/azure/architecture/framework/). As a core feature of Advisor, Advisor score can help you achieve these goals effectively and efficiently. To get the most out of Azure, it's crucial to understand where you are in your w It's also important to track and report the progress you're making in this optimization journey. With Advisor score, you can easily do all these things with the new gamification experience. -As your personalized cloud consultant, Azure Advisor continually assesses your usage telemetry and resource configuration to check for industry best practices. Advisor then aggregates its findings into a single score. With this score, you can tell at a glance if you're taking the necessary steps to build reliable, secure, and cost-efficient solutions. +As your personalized cloud consultant, Advisor continually assesses your usage telemetry and resource configuration to check for industry best practices. Advisor then aggregates its findings into a single score. With this score, you can tell at a glance if you're taking the necessary steps to build reliable, secure, and cost-efficient solutions. The Advisor score consists of an overall score, which can be further broken down into five category scores. One score for each category of Advisor represents the five pillars of the Well-Architected Framework. You can track the progress you make over time by viewing your overall score and ## Use Advisor score in the portal -1. Sign in to the [**Azure portal**](https://portal.azure.com). +1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page. -1. Select **Advisor score** in the left menu pane to open score page. +1. Select **Advisor score** on the left pane to open the score page. ## Interpret an Advisor score Advisor displays your overall Advisor score and a breakdown for Advisor categori * **Score by category** for each recommendation tells you which outstanding recommendations improve your score the most. These values reflect both the weight of the recommendation and the predicted ease of implementation. These factors help to make sure you can get the most value with your time. They also help you with prioritization. * **Category score impact** for each recommendation helps you prioritize your remediation actions for each category. -The contribution of each recommendation to your category score is shown clearly on the **Advisor score** page in the Azure portal. You can increase each category score by the percentage point listed in the **Potential score increase** column. 
This value reflects both the weight of the recommendation within the category and the predicted ease of implementation to address the potentially easiest tasks. Focusing on the recommendations with the greatest score impact will help you make the most progress with time. +The contribution of each recommendation to your category score is shown clearly on the **Advisor score** page in the Azure portal. You can increase each category score by the percentage point listed in the **Potential score increase** column. This value reflects both the weight of the recommendation within the category and the predicted ease of implementation to address the potentially easiest tasks. Focusing on the recommendations with the greatest score impact helps you make the most progress with time. ![Screenshot that shows the Advisor score impact.](https://user-images.githubusercontent.com/41593141/195171044-6a45fa99-a291-49f3-8914-2b596771e63b.png) -If any Advisor recommendations aren't relevant for an individual resource, you can postpone or dismiss those recommendations. They'll be excluded from the score calculation with the next refresh. Advisor will also use this input as feedback to improve the model. +If any Advisor recommendations aren't relevant for an individual resource, you can postpone or dismiss those recommendations. They're excluded from the score calculation with the next refresh. Advisor also uses this input as feedback to improve the model. ## How is an Advisor score calculated? Advisor displays your category scores and your overall Advisor score as percentages. A score of 100% in any category means all your resources, *assessed by Advisor*, follow the best practices that Advisor recommends. On the other end of the spectrum, a score of 0% means that none of your resources, assessed by Advisor, follows Advisor recommendations. -**Each of the five categories has a highest potential score of 100.** Your overall Advisor score is calculated as a sum of each applicable category score, divided by the sum of the highest potential score from all applicable categories. In most cases this means adding up five Advisor scores for each category and dividing by 500. But *each category score is calculated only if you use resources that are assessed by Advisor*. +**Each of the five categories has a highest potential score of 100.** Your overall Advisor score is calculated as a sum of each applicable category score, divided by the sum of the highest potential score from all applicable categories. In most cases, this means adding up five Advisor scores for each category and dividing by 500. But *each category score is calculated only if you use resources that are assessed by Advisor*. ### Advisor score calculation example -* **Single subscription score:** This example is the simple mean of all Advisor category scores for your subscription. If the Advisor category scores are - **Cost** = 73, **Reliability** = 85, **Operational excellence** = 77, and **Performance** = 100, the Advisor score would be (73 + 85 + 77 + 100)/(4x100) = 0.84% or 84%. -* **Multiple subscriptions score:** When multiple subscriptions are selected, the overall Advisor score is calculated as an average of aggregated category scores. Each category score is calculated using individual subscription score and subscription consumsumption based weight. Overall score is calculated as sum of aggregated category scores divided by the sum of the highest potential scores. 
+* **Single subscription score:** This example is the simple mean of all Advisor category scores for your subscription. If the Advisor category scores are **Cost** = 73, **Reliability** = 85, **Operational excellence** = 77, and **Performance** = 100, the Advisor score would be (73 + 85 + 77 + 100)/(4x100) = 0.84% or 84%. +* **Multiple subscriptions score:** When multiple subscriptions are selected, the overall Advisor score is calculated as an average of aggregated category scores. Each category score is calculated by using the individual subscription score and the subscription consumption-based weight. The overall score is calculated as the sum of aggregated category scores divided by the sum of the highest potential scores. ### Scoring methodology The calculation of the Advisor score can be summarized in four steps: Advisor applies this model at an Advisor category level to give an Advisor score for each category. **Security** uses a [secure score](../defender-for-cloud/secure-score-security-controls.md) model. A simple average produces the final Advisor score. -## Frequently Asked Questions (FAQs) +## Frequently asked questions (FAQs) ++Here are answers to common questions about Advisor score. ### How often is my score refreshed? Your score is refreshed at least once per day. Your score can change if you remediate impacted resources by adopting the best practices that Advisor recommends. If you or anyone with permissions on your subscription has modified or created new resources, you might also see fluctuations in your score. Your score is based on a ratio of the cost-impacted resources relative to the total cost of all resources. -### I implemented a recommendation but my score did not change. Why the score did not increase? +### I implemented a recommendation but my score didn't change. Why didn't the score increase? -The score does not reflect adopted recommendations right away. It takes at least 24 hours for the score to change after the recommendation is remediated. +The score doesn't reflect adopted recommendations right away. It takes at least 24 hours for the score to change after the recommendation is remediated. ### Why do some recommendations have the empty "-" value in the category score impact column? This message means that the recommendation is new, and we're working on bringing ### What if a recommendation isn't relevant? -If you dismiss a recommendation from Advisor, it is excluded from the calculation of your score. Dismissing recommendations also helps Advisor improve the quality of recommendations. +If you dismiss a recommendation from Advisor, it's excluded from the calculation of your score. Dismissing recommendations also helps Advisor improve the quality of recommendations. ### Why don't I have a score for one or more categories or subscriptions? The scoring methodology is designed to control for the number of resources on a ### Does my score depend on how much I spend on Azure? -No. Your score isn't necessarily a reflection of how much you spend. Unnecessary spending will result in a lower **Cost** score. +No. Your score isn't necessarily a reflection of how much you spend. Unnecessary spending results in a lower **Cost** score. -## Next steps +## Related content For more information about Advisor recommendations, see: |
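Restated as a formula, the scoring rule described in the row above reads as follows; the symbols are ours rather than the article's, with s_{c,s} a category score for one subscription and w_s the consumption-based weight of subscription s. Note the single-subscription example evaluates to the ratio 0.8375, reported as 84%.

```latex
% Category score aggregated across the selected subscriptions:
\text{score}_c = \frac{\sum_s w_s \, s_{c,s}}{\sum_s w_s}
% Overall score over the applicable categories:
\text{overall} = \frac{\sum_c \text{score}_c}{100 \cdot N_{\text{applicable}}}
% Single-subscription example from the row:
% (73 + 85 + 77 + 100) / 400 = 0.8375, i.e. 84%.
```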
advisor | View Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/view-recommendations.md | -# Configure Azure Advisor recommendations view +# Configure the Azure Advisor recommendations view -Azure Advisor provides recommendations to help you optimize your Azure deployments. Within Advisor, you have access to a few features that help you to narrow down your recommendations to only those that matter to you. +Azure Advisor provides recommendations to help you optimize your Azure deployments. Within Advisor, you have access to a few features that help you narrow down your recommendations to only the ones that matter to you. ## Configure subscriptions and resource groups -Advisor gives you the ability to select Subscriptions and Resource Groups that matter to you and your organization. You only see recommendations for the subscriptions and resource groups that you select. By default, all are selected. Configuration settings apply to the subscription or resource group, so the same settings apply to everyone that has access to that subscription or resource group. Configuration settings can be changed in the Azure portal or programmatically. +Advisor gives you the ability to select subscriptions and resource groups that matter to you and your organization. You only see recommendations for the subscriptions and resource groups that you select. By default, all are selected. Configuration settings apply to the subscription or resource group, so the same settings apply to everyone that has access to that subscription or resource group. Configuration settings can be changed in the Azure portal or programmatically. To make changes in the Azure portal: To make changes in the Azure portal: 1. Select **Configuration** from the menu. - :::image type="content" source="./media/view-recommendations/configuration.png" alt-text="Screenshot of Azure Advisor showing configuration pane."::: + :::image type="content" source="./media/view-recommendations/configuration.png" alt-text="Screenshot of Azure Advisor showing the Configuration pane."::: -1. Check the box in the **Include** column for any subscriptions or resource groups to receive Advisor recommendations. If the box is disabled, you may not have permission to make a configuration change on that subscription or resource group. Learn more about [permissions in Azure Advisor](permissions.md). +1. Select the checkbox in the **Include** column for any subscriptions or resource groups to receive Advisor recommendations. If the box is disabled, you might not have permission to make a configuration change on that subscription or resource group. Learn more about [permissions in Azure Advisor](permissions.md). -1. Click **Apply** at the bottom after you make a change. +1. Select **Apply** at the bottom after you make a change. -## Filtering your view in the Azure portal +## Filter your view in the Azure portal -Configuration settings remain active until changed. If you want to limit the view of recommendations for a single viewing, you can use the drop downs provided at the top of the Advisor panel. You can filter recommendations by subscription, resource group, workload, resource type, recommendation status and impact. These filters are available for Overview, Score, Cost, Security, Reliability, Operational Excellence, Performance and All Recommendations pages. +Configuration settings remain active until changed. 
If you want to limit the view of recommendations for a single viewing, you can use the dropdown lists provided at the top of the Advisor pane. You can filter recommendations by subscription, resource group, workload, resource type, recommendation status, and impact. These filters are available for **Overview**, **Score**, **Cost**, **Security**, **Reliability**, **Operational excellence**, **Performance**, and **All recommendations** pages. - :::image type="content" source="./media/view-recommendations/filtering.png" alt-text="Screenshot of Azure Advisor showing filtering options."::: + :::image type="content" source="./media/view-recommendations/filtering.png" alt-text="Screenshot of Advisor showing filtering options."::: > [!NOTE]-> Contact your account team to add new workloads to the workload filter or edit workload names. +> Contact your account team to add new workloads to the workload filter or edit workload names. -## Dismissing and postponing recommendations +## Dismiss and postpone recommendations -Azure Advisor allows you to dismiss or postpone recommendations on a single resource. If you dismiss a recommendation, you do not see it again unless you manually activate it. However, postponing a recommendation allows you to specify a duration after which the recommendation is automatically activated again. Postponing can be done in the Azure portal or programmatically. +Advisor allows you to dismiss or postpone recommendations on a single resource. If you dismiss a recommendation, you don't see it again unless you manually activate it. However, postponing a recommendation allows you to specify a duration after which the recommendation is automatically activated again. Postponing can be done in the Azure portal or programmatically. ### Postpone a single recommendation in the Azure portal 1. Open [Azure Advisor](https://aka.ms/azureadvisordashboard) in the Azure portal.-1. Select a recommendation category to view your recommendations -1. Select a recommendation from the list of recommendations -1. Select Postpone or Dismiss for the recommendation you want to postpone or dismiss +1. Select a recommendation category to view your recommendations. +1. Select a recommendation from the list of recommendations. +1. Select **Postpone** or **Dismiss** for the recommendation you want to postpone or dismiss. - :::image type="content" source="./media/view-recommendations/postpone-dismiss.png" alt-text="Screenshot of the Use Managed Disks window showing the select column and Postpone and Dismiss actions for a single recommendation highlighted."::: + :::image type="content" source="./media/view-recommendations/postpone-dismiss.png" alt-text="Screenshot that shows the Use Managed Disks page with the Select column and Postpone and Dismiss actions for a single recommendation highlighted."::: ### Postpone or dismiss multiple recommendations in the Azure portal Azure Advisor allows you to dismiss or postpone recommendations on a single reso 1. Select a recommendation category to view your recommendations. 1. Select a recommendation from the list of recommendations. 1. Select the checkbox at the left of the row for all resources you want to postpone or dismiss the recommendation.-1. Select **Postpone** or **Dismiss** at the top left of the table. +1. Select **Postpone** or **Dismiss** in the upper-left corner of the table. 
- :::image type="content" source="./media/view-recommendations/postpone-dismiss-multiple.png" alt-text="Screenshot of the Use Managed Disks window showing the select column and Postpone and Dismiss actions on the top left of the table highlighted."::: + :::image type="content" source="./media/view-recommendations/postpone-dismiss-multiple.png" alt-text="Screenshot that shows the Use Managed Disks page with the Select column and Postpone and Dismiss actions in the table highlighted."::: > [!NOTE]-> You need contributor or owner permission to dismiss or postpone a recommendation. Learn more about permissions in Azure Advisor. +> You need Contributor or Owner permission to dismiss or postpone a recommendation. Learn more about permissions in Advisor. -> [!NOTE] -> If the selection boxes are disabled, recommendations may still be loading. Please wait for all recommendations to load before trying to postpone or dismiss. +If the selection boxes are disabled, recommendations might still be loading. Wait for all recommendations to load before you try to postpone or dismiss. ### Reactivate a postponed or dismissed recommendation -You can activate a recommendation that has been postponed or dismissed. This action can be done in the Azure portal or programmatically. In the Azure portal: +You can activate a recommendation that was postponed or dismissed. This action can be done in the Azure portal or programmatically. In the Azure portal: -1. Open [Azure Advisor](https://aka.ms/azureadvisordashboard) in the Azure portal. +1. Open [Advisor](https://aka.ms/azureadvisordashboard) in the Azure portal. -1. Change the filter on the Overview panel to **Postponed**. Advisor then displays postponed or dismissed recommendations. +1. Change the filter on the **Overview** pane to **Postponed**. Advisor then displays postponed or dismissed recommendations. - :::image type="content" source="./media/view-recommendations/activate-postponed.png" alt-text="Screenshot of the Azure Advisor window showing the Postponed drop-down menu selected."::: + :::image type="content" source="./media/view-recommendations/activate-postponed.png" alt-text="Screenshot that shows the Advisor pane with the Postponed dropdown menu selected."::: 1. Select a category to see **Postponed** and **Dismissed** recommendations. -1. Select a recommendation from the list of recommendations. This opens recommendations with the **Postponed & Dismissed** tab already selected to show the resources for which this recommendation has been postponed or dismissed. +1. Select a recommendation from the list of recommendations. This action opens recommendations with the **Postponed & Dismissed** tab already selected to show the resources for which this recommendation was postponed or dismissed. -1. Click on **Activate** at the end of the row. Once clicked, the recommendation is active for that resource and so removed from this table. The recommendation is now visible in the **Active** tab. - - :::image type="content" source="./media/view-recommendations/activate-postponed-2.png" alt-text="Screenshot of the Enable Soft Delete window showing the Postponed & Dismissed tab with the Activate action highlighted."::: +1. Select **Activate** at the end of the row. The recommendation is now active for that resource and removed from the table. The recommendation is visible on the **Active** tab. 
-## Next steps + :::image type="content" source="./media/view-recommendations/activate-postponed-2.png" alt-text="Screenshot that shows the Enable Soft Delete pane with the Postponed & Dismissed tab and the Activate action highlighted."::: -This article explains how you can view recommendations that matter to you in Azure Advisor. To learn more about Advisor, see: +## Related content ++This article explains how you can view recommendations that matter to you in Advisor. To learn more about Advisor, see: - [What is Azure Advisor?](advisor-overview.md)-- [Getting Started with Advisor](advisor-get-started.md)+- [Get started with Advisor](advisor-get-started.md) - [Permissions in Azure Advisor](permissions.md)--- |
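The row above notes that postponing and dismissing "can be done in the Azure portal or programmatically" without showing the programmatic path. A hedged sketch of that path is the Advisor suppressions REST API; the IDs are placeholders, and the seven-day TTL format shown is an assumption to verify against the Advisor REST reference.

```bash
# Postpone (suppress) one recommendation for roughly seven days via REST.
az rest --method put \
  --url "https://management.azure.com/<resource-uri>/providers/Microsoft.Advisor/recommendations/<recommendation-id>/suppressions/postponeSevenDays?api-version=2020-01-01" \
  --body '{"properties": {"ttl": "07:00:00:00"}}'
```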
ai-services | Liveness | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/liveness.md | The high-level steps involved in liveness orchestration are illustrated below: #### [C#](#tab/csharp) ```csharp- var endpoint = new Uri(System.Environment.GetEnvironmentVariable("VISION_ENDPOINT")); - var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("VISION_KEY")); + var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT")); + var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY")); var sessionClient = new FaceSessionClient(endpoint, credential); The high-level steps involved in liveness orchestration are illustrated below: #### [Java](#tab/java) ```java- String endpoint = System.getenv("VISION_ENDPOINT"); - String accountKey = System.getenv("VISION_KEY"); + String endpoint = System.getenv("FACE_ENDPOINT"); + String accountKey = System.getenv("FACE_APIKEY"); FaceSessionClient sessionClient = new FaceSessionClientBuilder() .endpoint(endpoint) The high-level steps involved in liveness orchestration are illustrated below: #### [Python](#tab/python) ```python- endpoint = os.environ["VISION_ENDPOINT"] - key = os.environ["VISION_KEY"] + endpoint = os.environ["FACE_ENDPOINT"] + key = os.environ["FACE_APIKEY"] face_session_client = FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key)) The high-level steps involved in liveness orchestration are illustrated below: #### [REST API (Windows)](#tab/cmd) ```console- curl --request POST --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions" ^ - --header "Ocp-Apim-Subscription-Key: %VISION_KEY%" ^ + curl --request POST --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions" ^ + --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ^ --header "Content-Type: application/json" ^ --data ^ "{ ^ The high-level steps involved in liveness orchestration are illustrated below: #### [REST API (Linux)](#tab/bash) ```bash- curl --request POST --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions" \ - --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" \ + curl --request POST --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions" \ + --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \ --header "Content-Type: application/json" \ --data \ '{ The high-level steps involved in liveness orchestration are illustrated below: #### [REST API (Windows)](#tab/cmd) ```console- curl --request GET --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^ - --header "Ocp-Apim-Subscription-Key: %VISION_KEY%" + curl --request GET --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^ + --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ``` #### [REST API (Linux)](#tab/bash) ```bash- curl --request GET --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \ - --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" + curl --request GET --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \ + --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" ``` The high-level steps involved in liveness orchestration are illustrated below: #### [REST API (Windows)](#tab/cmd) ```console- curl --request DELETE --location 
"%VISION_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^ - --header "Ocp-Apim-Subscription-Key: %VISION_KEY%" + curl --request DELETE --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^ + --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ``` #### [REST API (Linux)](#tab/bash) ```bash- curl --request DELETE --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \ - --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" + curl --request DELETE --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \ + --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" ``` The high-level steps involved in liveness with verification orchestration are il #### [C#](#tab/csharp) ```csharp- var endpoint = new Uri(System.Environment.GetEnvironmentVariable("VISION_ENDPOINT")); - var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("VISION_KEY")); + var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT")); + var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY")); var sessionClient = new FaceSessionClient(endpoint, credential); The high-level steps involved in liveness with verification orchestration are il #### [Java](#tab/java) ```java- String endpoint = System.getenv("VISION_ENDPOINT"); - String accountKey = System.getenv("VISION_KEY"); + String endpoint = System.getenv("FACE_ENDPOINT"); + String accountKey = System.getenv("FACE_APIKEY"); FaceSessionClient sessionClient = new FaceSessionClientBuilder() .endpoint(endpoint) The high-level steps involved in liveness with verification orchestration are il #### [Python](#tab/python) ```python- endpoint = os.environ["VISION_ENDPOINT"] - key = os.environ["VISION_KEY"] + endpoint = os.environ["FACE_ENDPOINT"] + key = os.environ["FACE_APIKEY"] face_session_client = FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key)) The high-level steps involved in liveness with verification orchestration are il #### [REST API (Windows)](#tab/cmd) ```console- curl --request POST --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" ^ - --header "Ocp-Apim-Subscription-Key: %VISION_KEY%" ^ + curl --request POST --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" ^ + --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ^ --form "Parameters=""{\\\""livenessOperationMode\\\"": \\\""passive\\\"", \\\""deviceCorrelationId\\\"": \\\""723d6d03-ef33-40a8-9682-23a1feb7bccd\\\""}""" ^ --form "VerifyImage=@""test.png""" ``` #### [REST API (Linux)](#tab/bash) ```bash- curl --request POST --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" \ - --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" \ + curl --request POST --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" \ + --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \ --form 'Parameters="{ \"livenessOperationMode\": \"passive\", \"deviceCorrelationId\": \"723d6d03-ef33-40a8-9682-23a1feb7bccd\" The high-level steps involved in liveness with verification orchestration are il #### [REST API (Windows)](#tab/cmd) ```console- curl --request GET --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^ - --header "Ocp-Apim-Subscription-Key: 
%VISION_KEY%" + curl --request GET --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^ + --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ``` #### [REST API (Linux)](#tab/bash) ```bash- curl --request GET --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \ - --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" + curl --request GET --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \ + --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" ``` The high-level steps involved in liveness with verification orchestration are il #### [REST API (Windows)](#tab/cmd) ```console- curl --request DELETE --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^ - --header "Ocp-Apim-Subscription-Key: %VISION_KEY%" + curl --request DELETE --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^ + --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ``` #### [REST API (Linux)](#tab/bash) ```bash- curl --request DELETE --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \ - --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" + curl --request DELETE --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \ + --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" ``` |
ai-services | Add Faces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/add-faces.md | -This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C# and uses the Azure AI Face .NET client library. +This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C#. ## Initialization static async Task WaitCallLimitPerSecondAsync() } ``` -## Authorize the API call --When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object. - ## Create the PersonGroup This code creates a **PersonGroup** named `"MyPersonGroup"` to save the persons. const string personGroupId = "mypersongroupid"; const string personGroupName = "MyPersonGroup"; _timeStampQueue.Enqueue(DateTime.UtcNow);-await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = personGroupName, ["recognitionModel"] = "recognition_04" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content); +} ``` ## Create the persons for the PersonGroup await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName); This code creates **Persons** concurrently, and uses `await WaitCallLimitPerSecondAsync()` to avoid exceeding the call rate limit. ```csharp-Person[] persons = new Person[PersonCount]; +string?[] persons = new string?[PersonCount]; Parallel.For(0, PersonCount, async i => { await WaitCallLimitPerSecondAsync(); string personName = $"PersonName#{i}";- persons[i] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName); + using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = personName })))) + { + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons", content)) + { + string contentString = await response.Content.ReadAsStringAsync(); + persons[i] = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]); + } + } }); ``` Faces added to different persons are processed concurrently. 
Faces added for one ```csharp Parallel.For(0, PersonCount, async i => {- Guid personId = persons[i].PersonId; string personImageDir = @"/path/to/person/i/images"; foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg")) Parallel.For(0, PersonCount, async i => using (Stream stream = File.OpenRead(imagePath)) {- await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream); + using (var content = new StreamContent(stream)) + { + content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream"); + await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons/{persons[i]}/persistedfaces?detectionModel=detection_03", content); + } } } }); |
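The C# in the row above now issues raw REST calls instead of using the SDK. As a cross-check, the same PersonGroup creation expressed with curl looks like this, reusing the `FACE_ENDPOINT`/`FACE_APIKEY` placeholders from the other rows; the request body mirrors the dictionary serialized in the C#.

```bash
# Create the PersonGroup with the recognition_04 model, as in the C# above.
curl --request PUT "${FACE_ENDPOINT}/face/v1.0/persongroups/mypersongroupid" \
  --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \
  --header "Content-Type: application/json" \
  --data '{"name": "MyPersonGroup", "recognitionModel": "recognition_04"}'
```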
ai-services | Find Similar Faces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/find-similar-faces.md | -The [Find Similar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image. +The [Find Similar](/rest/api/face/face-recognition-operations/find-similar) operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image. This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md). You need to detect faces in images before you can compare them. In this guide, t The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model. -[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_face_detect_recognize)] +[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_face_detect_recognize)] The following code uses the above method to get face data from a series of images. -[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_loadfaces)] +[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_loadfaces)] #### [REST API](#tab/rest) In this guide, the face detected in the *Family1-Dad1.jpg* image should be retur The following code calls the Find Similar API on the saved list of faces. -[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar)] +[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_find_similar)] The following code prints the match details to the console: -[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar_print)] +[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_find_similar_print)] #### [REST API](#tab/rest) |
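Since the row above mainly swaps snippet include paths, here's a hedged sketch of the underlying Find Similar call itself for orientation; the face IDs are placeholders you'd obtain from earlier Detect calls, and the body shape follows the public Face REST API.

```bash
# Match a target face against a set of candidate face IDs.
curl --request POST "${FACE_ENDPOINT}/face/v1.0/findsimilars" \
  --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "faceId": "<target-face-id>",
    "faceIds": ["<candidate-face-id-1>", "<candidate-face-id-2>"],
    "maxNumOfCandidatesReturned": 10,
    "mode": "matchFace"
  }'
```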
ai-services | Identity Detect Faces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md | The code snippets in this guide are written in C# by using the Azure AI Face cli ## Setup -This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts. +This guide assumes that you already constructed a [FaceClient](/dotnet/api/azure.ai.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts. ## Submit data to the service -To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input. +To find faces and get their locations in an image, call the [DetectAsync](/dotnet/api/azure.ai.vision.face.faceclient.detectasync) method. It takes either a URL string or the raw image binary as input. -The service returns a [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) object, which you can query for different kinds of information, specified below. +The service returns a [FaceDetectionResult](/dotnet/api/azure.ai.vision.face.facedetectionresult) object, which you can query for different kinds of information, specified below. -For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction. +For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/azure.ai.vision.face.facedetectionresult.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction. ## Determine how to process the data This guide focuses on the specifics of the Detect call, such as what arguments y If you set the parameter _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks. The optional _faceIdTimeToLive_ parameter specifies how long (in seconds) the face ID should be stored on the server. After this time expires, the face ID is removed. The default value is 86400 (24 hours). ### Get face landmarks -[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`. 
+[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `FaceDetectionModel.Detection03` and the _returnFaceLandmarks_ parameter to `true`. ### Get face attributes Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concept-face-detection.md#attributes) conceptual section. -To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values. +To analyze face attributes, set the _detectionModel_ parameter to `FaceDetectionModel.Detection03` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/azure.ai.vision.face.faceattributetype) values. ## Get results from the service To analyze face attributes, set the _detectionModel_ parameter to `DetectionMode The following code demonstrates how you might retrieve the locations of the nose and pupils: You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector: When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright. When you know the direction of the face, you can rotate the rectangular face fra The following code shows how you might retrieve the face attribute data that you requested in the original call. To learn more about each of the attributes, see the [Face detection and attributes](../concept-face-detection.md) conceptual guide. In this guide, you learned how to use the various functionalities of face detect ## Related articles - [Reference documentation (REST)](/rest/api/face/operation-groups)-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)+- [Reference documentation (.NET SDK)](https://aka.ms/azsdk-csharp-face-ref) |
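Because the snippet includes in that article were trimmed in this change, here is an illustrative sketch of the landmark handling the prose describes. It assumes a `face` variable of type `FaceDetectionResult` returned with `returnFaceLandmarks: true`; the landmark property names (`NoseTip`, `PupilLeft`, `MouthLeft`, and so on) mirror the REST response schema and are assumptions, not verified excerpts:

```csharp
// Sketch: read the nose and pupil locations from the landmark data.
var landmarks = face.FaceLandmarks;
Console.WriteLine($"Nose tip: ({landmarks.NoseTip.X}, {landmarks.NoseTip.Y})");
Console.WriteLine($"Left pupil: ({landmarks.PupilLeft.X}, {landmarks.PupilLeft.Y})");
Console.WriteLine($"Right pupil: ({landmarks.PupilRight.X}, {landmarks.PupilRight.Y})");

// Direction of the face as a vector from the center of the mouth to the
// center of the eyes, as the article describes. Y is negated because image
// coordinates grow downward.
double eyesCenterX = (landmarks.PupilLeft.X + landmarks.PupilRight.X) / 2.0;
double eyesCenterY = (landmarks.PupilLeft.Y + landmarks.PupilRight.Y) / 2.0;
double mouthCenterX = (landmarks.MouthLeft.X + landmarks.MouthRight.X) / 2.0;
double mouthCenterY = (landmarks.MouthLeft.Y + landmarks.MouthRight.Y) / 2.0;
var faceDirection = (X: eyesCenterX - mouthCenterX, Y: -(eyesCenterY - mouthCenterY));
```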
ai-services | Mitigate Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/mitigate-latency.md | We recommend that you select a region that is closest to your users to minimize The Face service provides two ways to upload images for processing: uploading the raw byte data of the image directly in the request, or providing a URL to a remote image. Regardless of the method, the Face service needs to download the image from its source location. If the connection from the Face service to the client or the remote server is slow or poor, it affects the response time of requests. If you have an issue with latency, consider storing the image in Azure Blob Storage and passing the image URL in the request. For more implementation details, see [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). An example API call: ``` csharp-var faces = await client.Face.DetectWithUrlAsync("https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name>"); +var url = "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name>"; +var response = await faceClient.DetectAsync(new Uri(url), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false); +var faces = response.Value; ``` Be sure to use a storage account in the same region as the Face resource. This reduces the latency of the connection between the Face service and the storage account. To achieve the optimal balance between accuracy and speed, follow these tips to #### Other file size tips Note the following additional tips:-- For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size increases processing speed. When you use detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080 pixels.+- For face detection, when using detection model `FaceDetectionModel.Detection01`, reducing the image file size increases processing speed. When you use detection model `FaceDetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080 pixels. - For face recognition, reducing the face size will only increase the speed if the image is smaller than 200x200 pixels. - The performance of the face detection methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small. Note the following additional tips: If you need to call multiple APIs, consider calling them in parallel if your application design allows for it. 
For example, if you need to detect faces in two images to perform a face comparison, you can call them in an asynchronous task: ```csharp-var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg"); -var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedydebating-richard-nixon.jpg"); +string url1 = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg"; +string url2 = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection2.jpg"; +var response1 = client.DetectAsync(new Uri(url1), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false); +var response2 = client.DetectAsync(new Uri(url2), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false); -Task.WaitAll (new Task<IList<DetectedFace>>[] { faces_1, faces_2 }); -IEnumerable<DetectedFace> results = faces_1.Result.Concat (faces_2.Result); +Task.WaitAll(new Task<Response<IReadOnlyList<FaceDetectionResult>>>[] { response1, response2 }); +IEnumerable<FaceDetectionResult> results = response1.Result.Value.Concat(response2.Result.Value); ``` ## Smooth over spiky traffic |
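As one general illustration of the "smooth over spiky traffic" advice that heading introduces (not code from the article itself), a client-side throttle can cap the number of in-flight Face API calls so bursts are spread out; a minimal sketch using `SemaphoreSlim`:

```csharp
using System.Net.Http;

// Sketch: allow at most five concurrent Face API calls. The limit of 5 is
// an arbitrary illustrative value, not a documented service quota.
var throttle = new SemaphoreSlim(5);

async Task<HttpResponseMessage> SendThrottledAsync(HttpClient client, HttpRequestMessage request)
{
    await throttle.WaitAsync();
    try
    {
        return await client.SendAsync(request);
    }
    finally
    {
        throttle.Release();
    }
}
```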
ai-services | Specify Detection Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md | When you use the [Detect] API, you can assign the model version with the `detect A request URL for the [Detect] REST API looks like this: -`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>` +`https://westus.api.cognitive.microsoft.com/face/v1.0/detect?detectionModel={detectionModel}&recognitionModel={recognitionModel}&returnFaceId={returnFaceId}&returnFaceAttributes={returnFaceAttributes}&returnFaceLandmarks={returnFaceLandmarks}&returnRecognitionModel={returnRecognitionModel}&faceIdTimeToLive={faceIdTimeToLive}` If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API uses the default model version (`detection_01`). See the following code example for the .NET client library. ```csharp string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";-var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: false, returnFaceLandmarks: false, recognitionModel: "recognition_04", detectionModel: "detection_03"); +var response = await faceClient.DetectAsync(new Uri(imageUrl), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false, returnFaceLandmarks: false); +var faces = response.Value; ``` ## Add face to Person with specified model See the following code example for the .NET client library. ```csharp // Create a PersonGroup and add a person with face detected by "detection_03" model string personGroupId = "mypersongroupid";-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04"); --string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId; +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Group Name", ["recognitionModel"] = "recognition_04" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content); +} ++string? 
personId = null; +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Name" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons", content)) + { + string contentString = await response.Content.ReadAsStringAsync(); + personId = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]); + } +} string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";-await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03"); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = imageUrl })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03", content); +} ``` This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`. This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Perso You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library. ```csharp-await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04"); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My face collection", ["recognitionModel"] = "recognition_04" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}", content); +} string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";-await client.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03"); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = imageUrl })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}/persistedfaces?detectionModel=detection_03", content); +} ``` This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`. 
In this article, you learned how to specify the detection model to use with different Face APIs. * [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp) * [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)+* [Face JavaScript SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-javascript) [Detect]: /rest/api/face/face-detection-operations/detect [Identify From Person Group]: /rest/api/face/face-recognition-operations/identify-from-person-group |
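To make the query-parameter usage above concrete, here is a hedged raw-REST sketch of a [Detect] call that selects `detection_03`. It reuses the `httpClient`/`ENDPOINT` conventions of the other snippets in this batch, and the image URL is a placeholder:

```csharp
// Sketch: Detect with detection_03 and recognition_04 over raw REST.
var body = new Dictionary<string, object> { ["url"] = "<image-url>" }; // placeholder
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    var response = await httpClient.PostAsync(
        $"{ENDPOINT}/face/v1.0/detect?detectionModel=detection_03&recognitionModel=recognition_04&returnFaceId=false",
        content);
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}
```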
ai-services | Specify Recognition Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-recognition-model.md | When using the [Detect] API, assign the model version with the `recognitionModel Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in response. So, a request URL for the [Detect] REST API will look like this: -`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel]&subscription-key=<Subscription key>` +`https://westus.api.cognitive.microsoft.com/face/v1.0/detect?detectionModel={detectionModel}&recognitionModel={recognitionModel}&returnFaceId={returnFaceId}&returnFaceAttributes={returnFaceAttributes}&returnFaceLandmarks={returnFaceLandmarks}&returnRecognitionModel={returnRecognitionModel}&faceIdTimeToLive={faceIdTimeToLive}` If you're using the client library, you can assign the value for `recognitionModel` by passing a string representing the version. If you leave it unassigned, a default model version of `recognition_01` will be used. See the following code example for the .NET client library. ```csharp-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg"; -var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: true, returnFaceLandmarks: true, recognitionModel: "recognition_01", returnRecognitionModel: true); +string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg"; +var response = await faceClient.DetectAsync(new Uri(imageUrl), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true, returnFaceLandmarks: true, returnRecognitionModel: true); +var faces = response.Value; ``` > [!NOTE] The Face service can extract face data from an image and associate it with a **P A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([Create Person Group] or [Create Large Person Group]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [Get Person Group] API with the _returnRecognitionModel_ parameter set as **true**. -See the following code example for the .NET client library. +See the following .NET code example. ```csharp // Create an empty PersonGroup with "recognition_04" model string personGroupId = "mypersongroupid";-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04"); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Group Name", ["recognitionModel"] = "recognition_04" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content); +} ``` In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it's set up to use the _recognition_04_ model to extract face features. 
There is no change in the [Identify From Person Group] API; you only need to spe You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [Create Face List] API or [Create Large Face List]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [Get Face List] API with the _returnRecognitionModel_ parameter set as **true**. -See the following code example for the .NET client library. +See the following .NET code example. ```csharp-await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04"); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My face collection", ["recognitionModel"] = "recognition_04" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}", content); +} ``` This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent. In this article, you learned how to specify the recognition model to use with different Face APIs. * [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp) * [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)+* [Face JavaScript SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-javascript) [Detect]: /rest/api/face/face-detection-operations/detect [Verify Face To Face]: /rest/api/face/face-recognition-operations/verify-face-to-face [Identify From Person Group]: /rest/api/face/face-recognition-operations/identify-from-person-group-[Find Similar]: /rest/api/face/face-recognition-operations/find-similar-from-large-face-list +[Find Similar]: /rest/api/face/face-recognition-operations/find-similar-from-face-list [Create Person Group]: /rest/api/face/person-group-operations/create-person-group [Get Person Group]: /rest/api/face/person-group-operations/get-person-group [Train Person Group]: /rest/api/face/person-group-operations/train-person-group |
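As the article notes, you can read back the configured model with [Get Person Group] and `returnRecognitionModel=true`; a hedged sketch following the same raw-HTTP conventions as the other snippets in this batch:

```csharp
// Sketch: confirm which recognition model a PersonGroup was created with.
using (var response = await httpClient.GetAsync(
    $"{ENDPOINT}/face/v1.0/persongroups/mypersongroupid?returnRecognitionModel=true"))
{
    string json = await response.Content.ReadAsStringAsync();
    var group = JsonConvert.DeserializeObject<Dictionary<string, object>>(json);
    Console.WriteLine(group?["recognitionModel"]); // expected: recognition_04
}
```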
ai-services | Use Headpose | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-headpose.md | In this guide, you'll see how you can use the HeadPose attribute of a detected f The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to be able to rotate the rectangle to crop. -The [Azure AI Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) sample app uses the HeadPose attribute to rotate its detected face rectangles. +The [Azure AI Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/DemoWPF) sample app uses the HeadPose attribute to rotate its detected face rectangles. ### Explore the sample code -You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation. +You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Azure AI Face WPF](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/DemoWPF) app takes a list of **FaceDetectionResult** objects and returns a list of **[Face](https://github.com/Azure-Samples/azure-ai-vision/blob/main/face/DemoWPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation. 
```csharp /// <summary> You can programmatically rotate the face rectangle by using the HeadPose attribu /// <param name="maxSize">Image rendering size</param> /// <param name="imageInfo">Image width and height</param> /// <returns>Face structure for rendering</returns>-public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<DetectedFace> faces, int maxSize, Tuple<int, int> imageInfo) +public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<FaceDetectionResult> faces, int maxSize, Tuple<int, int> imageInfo) { var imageWidth = imageInfo.Item1; var imageHeight = imageInfo.Item2; public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<Detecte ### Display the updated rectangle -From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data: +From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/azure-ai-vision/blob/main/face/DemoWPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data: ```xaml <DataTemplate> From here, you can use the returned **Face** objects in your display. The follow ## Next steps -* See the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. -* Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements. +* See the [Azure AI Face WPF](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/DemoWPF) app on GitHub for a working example of rotated face rectangles. |
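As a rough sketch of the idea (the property path is an assumption based on the REST response shape, not the WPF sample's exact code), the in-plane head rotation can be read straight from the HeadPose attribute:

```csharp
// Sketch: HeadPose.Roll is the in-plane rotation of the head in degrees,
// which can serve as the FaceAngle used to rotate the rendered rectangle.
// Assumes HeadPose was requested via returnFaceAttributes; the property
// names below are assumed from the REST schema.
static double GetFaceAngle(FaceDetectionResult face) =>
    face.FaceAttributes?.HeadPose?.Roll ?? 0f;
```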
ai-services | Use Large Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md | This guide demonstrates the migration process. It assumes a basic familiarity wi **LargePersonGroup** and **LargeFaceList** are collectively referred to as large-scale operations. **LargePersonGroup** can contain up to 1 million persons, each with a maximum of 248 faces. **LargeFaceList** can contain up to 1 million faces. The large-scale operations are similar to the conventional **PersonGroup** and **FaceList** but have some differences because of the new architecture. -The samples are written in C# by using the Azure AI Face client library. +The samples are written in C#. > [!NOTE] > To enable Face search performance for **Identification** and **FindSimilar** at large scale, introduce a **Train** operation to preprocess the **LargeFaceList** and **LargePersonGroup**. The training time varies from seconds to about half an hour based on the actual capacity. During the training period, it's possible to perform **Identification** and **FindSimilar** if a successful training operation was completed before. The drawback is that newly added persons and faces don't appear in the results until the next training operation is completed. -## Step 1: Initialize the client object --When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object. --## Step 2: Code migration +## Step 1: Code migration This section focuses on how to migrate **PersonGroup** or **FaceList** implementation to **LargePersonGroup** or **LargeFaceList**. Although **LargePersonGroup** or **LargeFaceList** differs from **PersonGroup** or **FaceList** in design and internal implementation, the API interfaces are similar for backward compatibility. private static async Task TrainLargeFaceList( string largeFaceListId, int timeIntervalInMilliseconds = 1000) {+ HttpClient httpClient = new HttpClient(); + httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY); + // Trigger a train call.- await FaceClient.LargeFaceList.TrainAsync(largeFaceListId); + await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largefacelists/{largeFaceListId}/train", null); // Wait for training finish. while (true) { await Task.Delay(timeIntervalInMilliseconds);- var status = await faceClient.LargeFaceList.GetTrainingStatusAsyn(largeFaceListId); + string? 
trainingStatus = null; + using (var response = await httpClient.GetAsync($"{ENDPOINT}/face/v1.0/largefacelists/{largeFaceListId}/training")) + { + string contentString = await response.Content.ReadAsStringAsync(); + trainingStatus = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["status"]); + } - if (status.Status == Status.Running) + if ("running".Equals(trainingStatus)) { continue; }- else if (status.Status == Status.Succeeded) + else if ("succeeded".Equals(trainingStatus)) { break; } Previously, a typical use of **FaceList** with added faces and **FindSimilar** l const string FaceListId = "myfacelistid_001"; const string FaceListName = "MyFaceListDisplayName"; const string ImageDir = @"/path/to/FaceList/images";-await faceClient.FaceList.CreateAsync(FaceListId, FaceListName); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = FaceListName, ["recognitionModel"] = "recognition_04" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{FaceListId}", content); +} // Add Faces to the FaceList. Parallel.ForEach( Directory.GetFiles(ImageDir, "*.jpg"), async imagePath =>+ { + using (Stream stream = File.OpenRead(imagePath)) {- using (Stream stream = File.OpenRead(imagePath)) + using (var content = new StreamContent(stream)) {- await faceClient.FaceList.AddFaceFromStreamAsync(FaceListId, stream); + content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream"); + await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/facelists/{FaceListId}/persistedfaces?detectionModel=detection_03", content); }- }); + } + }); // Perform FindSimilar. const string QueryImagePath = @"/path/to/query/image";-var results = new List<SimilarPersistedFace[]>(); +var results = new List<HttpResponseMessage>(); using (Stream stream = File.OpenRead(QueryImagePath)) {- var faces = await faceClient.Face.DetectWithStreamAsync(stream); + var response = await faceClient.DetectAsync(BinaryData.FromStream(stream), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true); + var faces = response.Value; foreach (var face in faces) {- results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, FaceListId, 20)); + using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceId"] = face.FaceId, ["faceListId"] = FaceListId })))) + { + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + results.Add(await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/findsimilars", content)); + } } } ``` When migrating it to **LargeFaceList**, it becomes the following: const string LargeFaceListId = "mylargefacelistid_001"; const string LargeFaceListName = "MyLargeFaceListDisplayName"; const string ImageDir = @"/path/to/FaceList/images";-await faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName); +using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = LargeFaceListName, ["recognitionModel"] = "recognition_04" })))) +{ + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/largefacelists/{LargeFaceListId}", content); +} // Add Faces to the LargeFaceList. 
Parallel.ForEach( Directory.GetFiles(ImageDir, "*.jpg"), async imagePath =>+ { + using (Stream stream = File.OpenRead(imagePath)) {- using (Stream stream = File.OpenRead(imagePath)) + using (var content = new StreamContent(stream)) {- await faceClient.LargeFaceList.AddFaceFromStreamAsync(LargeFaceListId, stream); + content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream"); + await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largefacelists/{LargeFaceListId}/persistedfaces?detectionModel=detection_03", content); }- }); + } + }); // Train() is a newly added operation for LargeFaceList.-// Must call it before FindSimilarAsync() to ensure the newly added faces searchable. +// Must call it before FindSimilar to ensure the newly added faces are searchable. await TrainLargeFaceList(LargeFaceListId); // Perform FindSimilar. const string QueryImagePath = @"/path/to/query/image"; -var results = new List<SimilarPersistedFace[]>(); +var results = new List<HttpResponseMessage>(); using (Stream stream = File.OpenRead(QueryImagePath)) {- var faces = await faceClient.Face.DetectWithStreamAsync(stream); + var response = await faceClient.DetectAsync(BinaryData.FromStream(stream), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true); + var faces = response.Value; foreach (var face in faces) {- results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, largeFaceListId: LargeFaceListId)); + using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceId"] = face.FaceId, ["largeFaceListId"] = LargeFaceListId })))) + { + content.Headers.ContentType = new MediaTypeHeaderValue("application/json"); + results.Add(await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/findsimilars", content)); + } } } ``` As previously shown, the data management and the **FindSimilar** part are almost the same. The only exception is that a fresh preprocessing **Train** operation must complete in the **LargeFaceList** before **FindSimilar** works. -## Step 3: Train suggestions +## Step 2: Train suggestions Although the **Train** operation speeds up [FindSimilar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) and [Identification](/rest/api/face/face-recognition-operations/identify-from-large-person-group), the training time suffers, especially at large scale. The estimated training time in different scales is listed in the following table. and [Identification](/rest/api/face/face-recognition-operations/identify-from-la To better utilize the large-scale feature, we recommend the following strategies. -### Step 3a: Customize time interval +### Step 2a: Customize time interval As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds to delay the infinite training status checking process. For **LargeFaceList** with more faces, using a larger interval reduces the call counts and cost. Customize the time interval according to the expected capacity of the **LargeFaceList**. The same strategy also applies to **LargePersonGroup**. For example, when you train a **LargePersonGroup** with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval. -### Step 3b: Small-scale buffer +### Step 2b: Small-scale buffer Persons or faces in a **LargePersonGroup** or a **LargeFaceList** are searchable only after being trained. 
In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired. An example workflow: 1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the **Train** operation on the master collection. 1. Delete the old buffer collection after the **Train** operation finishes on the master collection. -### Step 3c: Standalone training +### Step 2c: Standalone training If a relatively long latency is acceptable, it isn't necessary to trigger the **Train** operation right after you add new data. Instead, the **Train** operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the **Train** frequency. Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceLis ```csharp private static void Main() {- // Create a LargePersonGroup. - const string LargePersonGroupId = "mylargepersongroupid_001"; - const string LargePersonGroupName = "MyLargePersonGroupDisplayName"; - faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, LargePersonGroupName).Wait(); - // Set up standalone training at regular intervals. const int TimeIntervalForStatus = 1000 * 60; // 1-minute interval for getting training status. const double TimeIntervalForTrain = 1000 * 60 * 60; // 1-hour interval for training. var trainTimer = new Timer(TimeIntervalForTrain);- trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed(LargePersonGroupId, TimeIntervalForStatus); + trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed("mylargepersongroupid_001", TimeIntervalForStatus); trainTimer.AutoReset = true; trainTimer.Enabled = true; |
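The `TrainTimerOnElapsed` handler isn't shown in the excerpt; a hypothetical implementation, assuming the `TrainLargePersonGroup` helper the article posits, could be as small as:

```csharp
// Hypothetical handler: delegates to a TrainLargePersonGroup helper that
// polls training status the same way TrainLargeFaceList does above.
private static void TrainTimerOnElapsed(string largePersonGroupId, int timeIntervalInMilliseconds)
{
    TrainLargePersonGroup(largePersonGroupId, timeIntervalInMilliseconds).Wait();
}
```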
ai-services | Use Persondirectory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-persondirectory.md | HttpResponseMessage response; var body = new Dictionary<string, object>(); body.Add("faceId", "{guid1}"); body.Add("personId", "{guid1}");-var jsSerializer = new JavaScriptSerializer(); -byte[] byteData = Encoding.UTF8.GetBytes(jsSerializer.Serialize(body)); +byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body)); using (var content = new ByteArrayContent(byteData)) { |
ai-services | Overview Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-identity.md | And these images are the candidate faces: ![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg) -To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) reference documentation. +To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](/rest/api/face/face-recognition-operations/find-similar) reference documentation. ## Group faces |
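In the REST request, the behavior is selected with the `mode` field of the Find Similar body; a hedged sketch (face and list IDs are placeholders, and `httpClient`/`ENDPOINT` follow the conventions of the other snippets in this batch):

```csharp
// Sketch: request exactly four candidates in matchFace mode. With the
// default "matchPerson" mode, only faces of the same person come back.
var body = new Dictionary<string, object>
{
    ["faceId"] = "<target-face-id>",              // placeholder from Detect
    ["largeFaceListId"] = "<large-face-list-id>", // placeholder
    ["maxNumOfCandidatesReturned"] = 4,
    ["mode"] = "matchFace"
};
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/findsimilars", content);
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}
```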
ai-services | Custom Categories Rapid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/custom-categories-rapid.md | curl --location --request PATCH 'https://<endpoint>/contentsafety/text/incidents --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data '{- "incidentName": "<text-incident-name>", - "incidentDefinition": "string" + \"incidentName\": \"<text-incident-name>\", + \"incidentDefinition\": \"string\" }' ``` curl --location --request PATCH 'https://<endpoint>/contentsafety/image/incident --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data '{- "incidentName": "<image-incident-name>" + \"incidentName\": \"<image-incident-name>\" }' ``` |
ai-services | Azure Openai Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md | At the same time, customers often require a custom answer authoring experience t * An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../../openai/how-to/create-resource.md). * An Azure Language Service resource and custom question answering project. If you don't have one already, then [create one](../quickstart/sdk.md). - * Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue. * Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource. |
ai-services | Gpt With Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md | The GPT-4 Turbo with Vision model answers general questions about what's present Enhancements let you incorporate other Azure AI services (such as Azure AI Vision) to add new functionality to the chat-with-vision experience. -**Object grounding**: Azure AI Vision complements GPT-4 Turbo with Vision's text response by identifying and locating salient objects in the input images. This lets the chat model give more accurate and detailed responses about the contents of the image. - > [!IMPORTANT] > To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. +> [!IMPORTANT] +> Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models. ++**Object grounding**: Azure AI Vision complements GPT-4 Turbo with Vision's text response by identifying and locating salient objects in the input images. This lets the chat model give more accurate and detailed responses about the contents of the image. + :::image type="content" source="../media/concepts/gpt-v/object-grounding.png" alt-text="Screenshot of an image with object grounding applied. Objects have bounding boxes with labels."::: :::image type="content" source="../media/concepts/gpt-v/object-grounding-response.png" alt-text="Screenshot of a chat response to an image prompt about an outfit. The response is an itemized list of clothing items seen in the image."::: **Optical Character Recognition (OCR)**: Azure AI Vision complements GPT-4 Turbo with Vision by providing high-quality OCR results as supplementary information to the chat model. It allows the model to produce higher quality responses for images with dense text, transformed images, and numbers-heavy financial documents, and increases the variety of languages the model can recognize in text. -> [!IMPORTANT] -> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. - :::image type="content" source="../media/concepts/gpt-v/receipts.png" alt-text="Photo of several receipts."::: :::image type="content" source="../media/concepts/gpt-v/ocr-response.png" alt-text="Screenshot of the JSON response of an OCR call."::: Enhancements let you incorporate other Azure AI services (such as Azure AI Visio > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RW1eHRf] -> [!NOTE] -> In order to use the video prompt enhancement, you need both an Azure AI Vision resource, in the paid (S1) tier, in addition to your Azure OpenAI resource. - ## Special pricing information > [!IMPORTANT] |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | Along with using Elasticsearch databases in Azure OpenAI Studio, you can also us -## Deploy to a copilot (preview) or web app +## Deploy to a copilot (preview), Teams app (preview), or web app After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI studio. :::image type="content" source="../media/use-your-data/deploy-model.png" alt-text="A screenshot showing the model deployment button in Azure OpenAI Studio." lightbox="../media/use-your-data/deploy-model.png"::: -This gives you the option of deploying a standalone web app for you and your users to interact with chat models using a graphical user interface. See [Use the Azure OpenAI web app](../how-to/use-web-app.md) for more information. +This gives you multiple options for deploying your solution. -You can also deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai). +#### [Copilot (preview)](#tab/copilot) ++You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai). > [!NOTE] > Deploying to a copilot in Copilot Studio (preview) is only available in US regions. +#### [Teams app (preview)](#tab/teams) ++A Teams app lets you bring a conversational experience to your users in Teams to improve operational efficiency and democratize access to information. This Teams app is configured for users within your Azure account tenant and for personal chat (non-group chat) scenarios. +++**Prerequisites** ++- The latest version of [Visual Studio Code](https://code.visualstudio.com/) installed. +- The latest version of [Teams toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) installed. This is a VS Code extension that creates a project scaffolding for your app. +- [Node.js](https://nodejs.org/en/download/) (version 16 or 17) installed. For more information, see [Node.js version compatibility table for project type](/microsoftteams/platform/toolkit/build-environments#nodejs-version-compatibility-table-for-project-type). +- [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) installed. 
+- Sign in to your [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) (using this link to get a test account: [Developer program](https://developer.microsoft.com/microsoft-365/dev-program)). + - Enable **custom Teams apps** and turn on **custom app uploading** in your account (instructions [here](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant#enable-custom-teams-apps-and-turn-on-custom-app-uploading)) +- [Azure command-line interface (CLI)](/cli/azure/install-azure-cli) installed. This is a cross-platform command-line tool to connect to Azure and execute administrative commands on Azure resources. For more information on setting up environment variables, see the [Azure SDK documentation](https://github.com/Azure/azure-sdk-for-go/wiki/Set-up-Your-Environment-for-Authentication). +- Your Azure account has been assigned the **Cognitive Services OpenAI User** or **Cognitive Services OpenAI Contributor** role on the Azure OpenAI resource you're using, allowing your account to make Azure OpenAI API calls. For more information, see [Using your data with Azure OpenAI securely](/azure/ai-services/openai/how-to/use-your-data-securely#using-the-api) and [Add role assignment to an Azure OpenAI resource](/azure/ai-services/openai/how-to/role-based-access-control#add-role-assignment-to-an-azure-openai-resource) for instructions on setting this role in the Azure portal. +++You can deploy to a standalone Teams app directly from Azure OpenAI Studio. Follow the steps below: ++1. After you've added your data to the chat model, select **Deploy** and then **a new Teams app (preview)**. ++1. Enter the name of your Teams app and download the resulting .zip file. ++1. Extract the .zip file and open the folder in Visual Studio Code. ++1. If you chose **API key** in the data connection step, manually copy and paste your Azure AI Search key into the `src\prompts\chat\config.json` file. Your Azure AI Search key can be found in the Azure OpenAI Studio Playground by selecting the **View code** button; the key is located under Azure Search Resource Key. If you chose **System assigned managed identity**, you can skip this step. Learn more about different data connection options in the [Data connection](/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search#data-connection) section. ++1. Open the Visual Studio Code terminal and log in to the Azure CLI, selecting the account that you assigned the **Cognitive Services OpenAI User** role to. Use the `az login` command in the terminal to log in. ++1. To debug your app, press the **F5** key or select **Run and Debug** from the left pane. Then select your debugging environment from the dropdown list. A webpage opens where you can chat with your custom copilot. + > [!NOTE] + > The citation experience is available in **Debug (Edge)** or **Debug (Chrome)** only. ++1. After you've tested your copilot, you can provision, deploy, and publish your Teams app by selecting the **Teams Toolkit Extension** on the left pane in Visual Studio Code. Run the separate provision, deploy, and publish stages in the **Lifecycle** section. You may be asked to sign in to your Microsoft 365 account, where you have permissions to upload custom apps, and to your Azure account. + +1. Provision your app (detailed instructions in [Provision cloud resources](/microsoftteams/platform/toolkit/provision)). ++1. Assign the **Cognitive Services OpenAI User** role to your deployed App Service resource. + 1. 
Go to the Azure portal and select the newly created Azure App Service resource + 1. Go to **settings** -> **identity** -> **enable system assigned identity** + 1. Select **Azure role assignments** and then **add role assignments**. Specify the following parameters: + * Scope: resource group + * Subscription: the subscription of your Azure OpenAI resource + * Resource group of your Azure OpenAI resource + * Role: **Cognitive Services OpenAI User** ++1. Deploy your app to Azure by following the instructions in [Deploy to the cloud](/microsoftteams/platform/toolkit/deploy). ++1. Publish your app to Teams by following the instructions in [Publish Teams app](/microsoftteams/platform/toolkit/publish). ++The README file in your Teams app has additional details and tips. Also, see [Tutorial - Build Custom Copilot using Teams](/microsoftteams/platform/teams-ai-library-tutorial) for guided steps. ++#### [Web app](#tab/web-app) ++Deploying to a standalone web app lets you and your users interact with chat models through a graphical user interface. See [Use the Azure OpenAI web app](../how-to/use-web-app.md) for more information. +++ ## Use Azure OpenAI On Your Data securely You can use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks, and private endpoints. You can also restrict the documents that can be used in responses for different users with Azure AI Search security filters. See [Securely use Azure OpenAI On Your Data](../how-to/use-your-data-securely.md). |
ai-services | Use Your Image Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-image-data.md | Use this article to learn how to provide your own image data for GPT-4 Turbo wit ## Prerequisites -* An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services). -* Access granted to Azure OpenAI in the desired Azure subscription. -- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by [completing the form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have a problem. -* An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../how-to/create-resource.md). +- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. +- An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../how-to/create-resource.md). * At least the [Cognitive Services Contributor role](../how-to/role-based-access-control.md#cognitive-services-contributor) assigned to you for the Azure OpenAI resource. ## Add your data source |
ai-services | Azure Developer Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/azure-developer-cli.md | Use this article to learn how to automate resource deployment for Azure OpenAI S ## Prerequisites - An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services).-- Access granted to Azure OpenAI in the desired Azure subscription.-- Azure OpenAI requires registration and is currently available only to approved enterprise customers and partners. For more information, see [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context). You can apply for access to Azure OpenAI by [completing the form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have a problem. - - The Azure Developer CLI [installed](/azure/developer/azure-developer-cli/install-azd) on your machine. ## Clone and initialize the Azure Developer CLI template |
ai-services | Dall E | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/dall-e.md | OpenAI's DALL-E models generate images based on user-provided text prompts. This #### [DALL-E 3](#tab/dalle3) - An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.-- Access granted to DALL-E in the desired Azure subscription. - An Azure OpenAI resource created in the `SwedenCentral` region. - Then, you need to deploy a `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md). #### [DALL-E 2 (preview)](#tab/dalle2) - An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.-- Access granted to DALL-E in the desired Azure subscription. - An Azure OpenAI resource created in the East US region. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md). |
ai-services | Gpt With Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md | The **object grounding** integration brings a new layer to data analysis and use > [!IMPORTANT] > To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource. +> [!IMPORTANT] +> Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models. + > [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information). Follow these steps to set up a video retrieval system and integrate it with your > [!IMPORTANT] > To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource. +> [!IMPORTANT] +> Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models. + > [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information). |
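The enhancements described in this entry layer on top of an ordinary image chat completion. As a point of reference, a plain (non-enhanced) GPT-4 Turbo with Vision call looks roughly like this; the endpoint, key, deployment name, and image URL are placeholder assumptions:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-gpt4v-deployment>",  # placeholder: GPT-4 Turbo with Vision deployment
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this picture."},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```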
ai-services | Integrate Synapseml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/integrate-synapseml.md | This tutorial shows how to apply large language models at a distributed scale by - An Azure subscription. <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. -- Access granted to Azure OpenAI in your Azure subscription.-- Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete <a href="https://aka.ms/oai/access" target="_blank">this form</a>. If you need assistance, open an issue on this repo to contact Microsoft. - An Azure OpenAI resource. [Create a resource](create-resource.md?pivots=web-portal#create-a-resource). |
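The distributed pattern this tutorial builds toward centers on SynapseML's `OpenAICompletion` transformer, which scores one prompt per Spark DataFrame row in parallel. A sketch under stated assumptions: the import path differs across SynapseML versions (`synapse.ml.cognitive` in older releases, `synapse.ml.services.openai` in newer ones), and the key, deployment, and resource names are placeholders:

```python
from synapse.ml.cognitive import OpenAICompletion  # `synapse.ml.services.openai` in newer SynapseML

# `spark` is the active Spark session (predefined in Synapse and Fabric notebooks).
df = spark.createDataFrame(
    [("Hello, my name is",), ("The best thing about Spark is",)], ["prompt"]
)

completion = (
    OpenAICompletion()
    .setSubscriptionKey("<your-api-key>")          # placeholder: Azure OpenAI key
    .setDeploymentName("<your-deployment>")        # placeholder: completions model deployment
    .setCustomServiceName("<your-resource-name>")  # placeholder: resource name, not the full URL
    .setMaxTokens(200)
    .setPromptCol("prompt")
    .setErrorCol("error")
    .setOutputCol("completions")
)

# Each partition issues requests in parallel; per-row failures land in the error column.
completion.transform(df).show()
```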
ai-services | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md | -More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your OpenAI resource using Microsoft Entra ID. +More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your Azure OpenAI resource using Microsoft Entra ID. In the following sections, you'll use the Azure CLI to sign in, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI. ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>-- Access granted to the Azure OpenAI Service in the desired Azure subscription-- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the [Request Access to Azure OpenAI Service form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have an issue. - [Custom subdomain names are required to enable features like Microsoft Entra ID for authentication.]( ../../cognitive-services-custom-subdomains.md) |
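The sign-in-and-bearer-token flow this article describes can also be exercised end to end from Python with `azure-identity`; a hedged sketch in which the custom subdomain, deployment name, and API version are placeholders:

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a Microsoft Entra ID token for the Cognitive Services scope.
token = DefaultAzureCredential().get_token("https://cognitiveservices.azure.com/.default")

url = (
    "https://<your-resource>.openai.azure.com"  # placeholder: custom subdomain endpoint
    "/openai/deployments/<your-deployment>"     # placeholder: deployment name
    "/chat/completions?api-version=2024-02-01"
)
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(response.json())
```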
ai-services | Provisioned Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md | The following guide walks you through setting up a provisioned deployment with y ## Prerequisites - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)-- Access granted to Azure OpenAI in the desired Azure subscription.- Currently, access to this service is by application. You can apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access?azure-portal=true). - Obtained Quota for a provisioned deployment and purchased a commitment. > [!NOTE] |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md | At Microsoft, we're committed to the advancement of AI driven by principles that ## How do I get access to Azure OpenAI? -How do I get access to Azure OpenAI? --Access is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">Microsoft's commitment to responsible AI</a>. For now, we're working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations. --More specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to Azure OpenAI. --Apply here for access: --<a href="https://aka.ms/oaiapply" target="_blank">Apply now</a> +A Limited Access registration form is not required to access most Azure OpenAI models. Learn more on the [Azure OpenAI Limited Access page](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context). ## Comparing Azure OpenAI and OpenAI |
ai-services | Text To Speech Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/text-to-speech-quickstart.md | The available voices are: `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer ## Prerequisites - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).-- Access granted to Azure OpenAI Service in the desired Azure subscription. - An Azure OpenAI resource created in the North Central US or Sweden Central regions with the `tts-1` or `tts-1-hd` model deployed. For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md). -> [!NOTE] -> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access). - ## Set up ### Retrieve key and endpoint |
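Once a `tts-1` deployment exists, the synthesis call itself is short. A sketch with the `openai` Python SDK; the endpoint, key, and API version are placeholder assumptions:

```python
from pathlib import Path
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-15-preview",                           # assumption: a TTS-capable version
)

response = client.audio.speech.create(
    model="tts-1",   # the name of your text to speech deployment
    voice="alloy",   # one of: alloy, echo, fable, onyx, nova, shimmer
    input="Today is a wonderful day to build something people love!",
)
response.stream_to_file(Path("speech.mp3"))  # write the returned audio to disk
```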
ai-services | Fine Tune | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md | In this tutorial you learn how to: ## Prerequisites - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).-- Access granted to Azure OpenAI in the desired Azure subscription Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. - Python 3.8 or later version - The following Python libraries: `json`, `requests`, `os`, `tiktoken`, `time`, `openai`, `numpy`. - [Jupyter Notebooks](https://jupyter.org/) |
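The tutorial's core workflow (upload a JSONL training file, then start a job against it) reduces to two SDK calls. A sketch with placeholder endpoint, key, and base model name; check which base models are fine-tunable in your region:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-01",
)

# Upload the JSONL training data, then start a fine-tuning job that references it.
training_file = client.files.create(
    file=open("training_set.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0613",  # placeholder: a fine-tunable base model
)
print(job.id, job.status)  # poll this job until it reaches a terminal state
```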
ai-services | Use Your Data Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md | In this quickstart you can use your own data with Azure OpenAI models. Using Azu ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-- Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An Azure OpenAI resource deployed in a [supported region and with a supported model](./concepts/use-your-data.md#regional-availability-and-model-support). |
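Grounding a chat completion on your own data amounts to a normal chat call plus a `data_sources` entry passed through `extra_body`. A sketch in which the search endpoint, index, keys, and deployment name are placeholders; the exact schema depends on the API version you target:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-01",
)

completion = client.chat.completions.create(
    model="<your-deployment>",  # placeholder: chat model deployment name
    messages=[{"role": "user", "content": "What are my available health plans?"}],
    extra_body={
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": "https://<your-search>.search.windows.net",  # placeholder
                "index_name": "<your-index>",                            # placeholder
                "authentication": {"type": "api_key", "key": "<search-key>"},
            },
        }]
    },
)
print(completion.choices[0].message.content)
```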
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | For more information, see the [deployment types guide](https://aka.ms/aoai/docs/ ### DALL-E and GPT-4 Turbo Vision GA configurable content filters -Create custom content filters for your DALL-E 2 and 3, GPT-4 Turbo with Vision GA (gpt-4-turbo-2024-04-09) and GPT-4o deployments. [Content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new#configurability-preview) +Create custom content filters for your DALL-E 2 and 3, GPT-4 Turbo with Vision GA (`turbo-2024-04-09`), and GPT-4o deployments. [Content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new#configurability-preview) ### Asynchronous Filter available for all Azure OpenAI customers If you are currently using the `2023-03-15-preview` API, we recommend migrating ## April 2023 -- **DALL-E 2 public preview**. Azure OpenAI Service now supports image generation APIs powered by OpenAI's DALL-E 2 model. Get AI-generated images based on the descriptive text you provide. To learn more, check out the [quickstart](./dall-e-quickstart.md). To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/access).+- **DALL-E 2 public preview**. Azure OpenAI Service now supports image generation APIs powered by OpenAI's DALL-E 2 model. Get AI-generated images based on the descriptive text you provide. To learn more, check out the [quickstart](./dall-e-quickstart.md). - **Inactive deployments of customized models will now be deleted after 15 days; models will remain available for redeployment.** If a customized (fine-tuned) model is deployed for more than fifteen (15) days during which no completions or chat completions calls are made to it, the deployment will automatically be deleted (and no further hosting charges will be incurred for that deployment). The underlying customized model will remain available and can be redeployed at any time. To learn more check out the [how-to-article](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-studio#deploy-a-custom-model). |
ai-services | Whisper Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md | The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to ## Prerequisites - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).-- Access granted to Azure OpenAI Service in the desired Azure subscription. - An Azure OpenAI resource with a `whisper` model deployed in a supported region. [Whisper model regional availability](./concepts/models.md#whisper-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md). -> [!NOTE] -> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access). - ## Set up ### Retrieve key and endpoint |
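A transcription call against a `whisper` deployment is only a few lines in the `openai` Python SDK; the endpoint, key, and file name are placeholders, and the file must respect the 25 MB limit noted in this entry:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-01",
)

# `model` is the name of your whisper deployment.
result = client.audio.transcriptions.create(
    model="whisper",
    file=open("audio.wav", "rb"),  # placeholder: any supported audio file under 25 MB
)
print(result.text)
```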
ai-services | Migrate To Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/migrate-to-openai.md | QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser * A QnA Maker project. * An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../openai/how-to/create-resource.md).- * Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue. * Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource. ## Migrate to Azure OpenAI |
ai-studio | Concept Model Distillation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/concept-model-distillation.md | + + Title: Distillation in AI Studio ++description: Learn how to do distillation in Azure AI Studio. +++ Last updated : 07/23/2024++reviewer: anshirga ++++++# Distillation in Azure AI Studio ++In this article + - [Distillation](#distillation) + - [Next Steps](#next-steps) ++In Azure AI Studio, you can use distillation to efficiently train a student model. ++## Distillation ++In machine learning, distillation is a technique used to transfer knowledge from a large, complex model (often called the "teacher model") to a smaller, simpler model (the "student model"). This process helps the smaller model achieve similar performance to the larger one while being more efficient in terms of computation and memory usage. ++The main steps in knowledge distillation involve: ++- **Using the teacher model** to generate predictions for the dataset. ++- **Training the student model** using these predictions, along with the original dataset, to mimic the teacher model's behavior. + +You can use the sample notebook available at this [link](https://aka.ms/meta-llama-3.1-distillation) to see how to perform distillation. In this sample notebook, the teacher model is the Meta Llama 3.1 405B Instruct model, and the student model is the Meta Llama 3.1 8B Instruct model. ++We used an advanced prompt during synthetic data generation, which incorporates chain-of-thought (CoT) reasoning, resulting in higher-accuracy data labels in the synthetic data. This further improves the accuracy of the distilled model. ++## Next steps +- [What is Azure AI Studio?](../what-is-ai-studio.md) +- [Learn more about deploying Meta Llama models](../how-to/deploy-models-llama.md) ++- [Azure AI FAQ article](../faq.yml) |
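The two steps above amount to a small labeling loop in practice. A schematic sketch, not the sample notebook's code: a hypothetical serverless teacher endpoint labels prompts, and the resulting pairs are written out as chat-format JSONL for fine-tuning the student:

```python
import json
import requests

TEACHER_URL = "https://<teacher-endpoint>/v1/chat/completions"  # placeholder: teacher model endpoint
HEADERS = {"Authorization": "Bearer <endpoint-key>"}            # placeholder: endpoint key

def teacher_label(prompt: str) -> str:
    """Ask the teacher model for an answer (a CoT-style prompt can improve label quality)."""
    body = {"messages": [{"role": "user", "content": prompt}], "max_tokens": 256}
    response = requests.post(TEACHER_URL, headers=HEADERS, json=body)
    return response.json()["choices"][0]["message"]["content"]

# Write (prompt, teacher answer) pairs as a fine-tuning dataset for the student model.
prompts = ["Classify the sentiment: 'the movie was great'", "Classify the sentiment: 'awful plot'"]
with open("distillation_train.jsonl", "w") as out:
    for prompt in prompts:
        record = {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher_label(prompt)},
        ]}
        out.write(json.dumps(record) + "\n")
```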
ai-studio | Concept Synthetic Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/concept-synthetic-data.md | + + Title: Synthetic data generation in AI Studio ++description: Learn how to generate a synthetic dataset in Azure AI Studio. +++ Last updated : 07/23/2024++reviewer: anshirga ++++++# Synthetic data generation in Azure AI Studio ++In this article + - [Synthetic data generation](#synthetic-data-generation) + - [Next Steps](#next-steps) ++In Azure AI Studio, you can use synthetic data generation to efficiently produce predictions for your datasets. ++## Synthetic data generation ++Synthetic data generation involves creating artificial data that mimics the statistical properties of real-world data. This data is generated using algorithms and machine learning techniques, and it can be used in various ways, such as in computer simulations or for modeling real-world events. ++In machine learning, synthetic data is particularly valuable for several reasons: ++**Data Augmentation:** It helps expand the size of training datasets, which is crucial for training robust machine learning models. This is especially useful when real-world data is scarce or expensive to obtain. ++**Testing and Validation:** It allows for extensive testing and validation of machine learning models under various scenarios without the need for real-world data. ++You can use the sample notebook available at this [link](https://aka.ms/meta-llama-3.1-datagen) to see how to generate synthetic data. ++## Next steps +- [What is Azure AI Studio?](../what-is-ai-studio.md) +- [Learn more about deploying Meta Llama models](../how-to/deploy-models-llama.md) ++- [Azure AI FAQ article](../faq.yml) |
ai-studio | Fine Tuning Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/fine-tuning-overview.md | There isn't a single right answer to this question, but you should have clearly Now that you know when to leverage fine-tuning for your use-case, you can go to Azure AI Studio to find several models available to fine-tune including: - Azure OpenAI models-- Llama 2 family models+- Meta Llama 2 family models +- Meta Llama 3.1 family of models ### Azure OpenAI models Please note for fine-tuning Azure OpenAI models, you must add a connection to an ### Llama 2 family models The following Llama 2 family models are supported in Azure AI Studio for fine-tuning:-- `Llama-2-70b`-- `Llama-2-7b`-- `Llama-2-13b`+- `Meta-Llama-2-70b` +- `Meta-Llama-2-7b` +- `Meta-Llama-2-13b` Fine-tuning of Llama 2 models is currently supported in projects located in West US 3. +### Llama 3.1 family models +The following Llama 3.1 family models are supported in Azure AI Studio for fine-tuning: +- `Meta-Llama-3.1-70b-Instruct` +- `Meta-Llama-3.1-8b-Instruct` ++Fine-tuning of Llama 3.1 models is currently supported in projects located in West US 3. + ## Related content - [Learn how to fine-tune an Azure OpenAI model in Azure AI Studio](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context) |
ai-studio | Data Image Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md | Use this article to learn how to provide your own image data for GPT-4 Turbo wit ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - Be sure that you're assigned at least the [Cognitive Services Contributor role](../../ai-services/openai/how-to/role-based-access-control.md#cognitive-services-contributor) for the Azure OpenAI resource. - An Azure AI Search resource. See [create an Azure AI Search service in the portal](/azure/search/search-create-service-portal). If you don't have an Azure AI Search resource, you're prompted to create one when you add your data source later in this guide. |
ai-studio | Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md | Title: How to deploy Meta Llama models with Azure AI Studio + Title: How to deploy Meta Llama 3.1 models with Azure AI Studio -description: Learn how to deploy Meta Llama models with Azure AI Studio. +description: Learn how to deploy Meta Llama 3.1 models with Azure AI Studio. Previously updated : 5/21/2024 Last updated : 7/21/2024 reviewer: shubhirajMsft -# How to deploy Meta Llama models with Azure AI Studio +# How to deploy Meta Llama 3.1 models with Azure AI Studio [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)] -In this article, you learn about the Meta Llama models. You also learn how to use Azure AI Studio to deploy models from this set either to serverless APIs with pay-as-you-go billing or to managed compute. +In this article, you learn about the Meta Llama model family. You also learn how to use Azure AI Studio to deploy models from this set either to serverless APIs with pay-as-you-go billing or to managed compute. > [!IMPORTANT]- > Read more about the announcement of Meta Llama 3 models available now on Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/Llama3Announcement) and from [Meta Announcement Blog](https://aka.ms/meta-llama3-announcement-blog). + > Read more about the announcement of Meta Llama 3.1 405B Instruct and other Llama 3.1 models available now on Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/meta-llama-3.1-release-on-azure) and from [Meta Announcement Blog](https://aka.ms/meta-llama-3.1-release-announcement). -Meta Llama 3 models and tools are a collection of pretrained and fine-tuned generative text models ranging in scale from 8 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Meta-Llama-3-8B-Instruct and Meta-Llama-3-70B-Instruct. See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama3-langchain-sample), [LiteLLM](https://aka.ms/meta-llama3-litellm-sample), [OpenAI](https://aka.ms/meta-llama3-openai-sample) and the [Azure API](https://aka.ms/meta-llama3-azure-api-sample). +Now available on Azure AI Models-as-a-Service: +- `Meta-Llama-3.1-405B-Instruct` +- `Meta-Llama-3.1-70B-Instruct` +- `Meta-Llama-3.1-8B-Instruct` ++The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B and 405B sizes (text in/text out). All models support long context length (128k) and are optimized for inference with support for grouped query attention (GQA). The Llama 3.1 instruction-tuned text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. -## Deploy Meta Llama models as a serverless API  Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription. 
+See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama-3.1-405B-instruct-langchain), [LiteLLM](https://aka.ms/meta-llama-3.1-405B-instruct-litellm), [OpenAI](https://aka.ms/meta-llama-3.1-405B-instruct-openai) and the [Azure API](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests). -Meta Llama 3 models are deployed as a serverless API with pay-as-you-go billing through Microsoft Azure Marketplace, and they might add more terms of use and pricing. +## Deploy Meta Llama 3.1 405B Instruct as a serverless API ++Meta Llama 3.1 models - like `Meta Llama 3.1 405B Instruct` - can be deployed as a serverless API with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription. Meta Llama 3.1 models are deployed as a serverless API with pay-as-you-go billing through Microsoft Azure Marketplace, and they might add more terms of use and pricing. ### Azure Marketplace model offerings -# [Meta Llama 3](#tab/llama-three) +# [Meta Llama 3.1](#tab/llama-three) -The following models are available in Azure Marketplace for Llama 3 when deployed as a service with pay-as-you-go: +The following models are available in Azure Marketplace for Llama 3.1 and Llama 3 when deployed as a service with pay-as-you-go: -* [Meta Llama-3 8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-chat) -* [Meta Llama-3 70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-chat) +* [Meta-Llama-3.1-405B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-405B-base) +* [Meta-Llama-3.1-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8B-refresh) +* [Meta-Llama-3.1-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70B-refresh) +* [Meta-Llama-3-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-chat) +* [Meta-Llama-3-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-chat) # [Meta Llama 2](#tab/llama-two) If you need to deploy a different model, [deploy it to managed compute](#deploy- # [Meta Llama 3](#tab/llama-three) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 3 is only available with hubs created in these regions:+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 3.1 and Llama 3 is only available with hubs created in these regions:+ * East US * East US 2 If you need to deploy a different model, [deploy it to managed compute](#deploy- To create a deployment: 1. Sign in to [Azure AI Studio](https://ai.azure.com).-1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models). +1. Choose `Meta-Llama-3.1-405B-Instruct` to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models). Alternatively, you can initiate deployment by starting from your project in AI Studio. Select a project and then select **Deployments** > **+ Create**. -1. 
On the model's **Details** page, select **Deploy** and then select **Serverless API with Azure AI Content Safety**. +1. On the **Details** page for `Meta-Llama-3.1-405B-Instruct`, select **Deploy** and then select **Serverless API with Azure AI Content Safety**. 1. Select the project in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region. 1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.-1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, Meta-Llama-3-70B) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. +1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, `Meta-Llama-3.1-405B-Instruct`) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. > [!NOTE] > Subscribing a project to a particular Azure Marketplace offering (in this case, Meta-Llama-3-70B) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). To create a deployment: 1. You can always find the endpoint's details, URL, and access keys by navigating to the project page and selecting **Deployments** from the left menu. -To learn about billing for Meta Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-llama-models-deployed-as-a-service). +To learn about billing for Meta Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-meta-llama-31-models-deployed-as-a-service). # [Meta Llama 2](#tab/llama-two) To create a deployment: 1. You can return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**, which you can use to call the deployment and generate completions. 1. You can always find the endpoint's details, URL, and access keys by navigating to your project and selecting **Deployments** from the left menu. -To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-llama-models-deployed-as-a-service). 
+To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-meta-llama-31-models-deployed-as-a-service). Models deployed as a service can be consumed using either the chat or the comple 1. Select your project or hub and then select **Deployments** from the left menu. -1. Find and select the deployment you created. +1. Find and select the `Meta-Llama-3.1-405B-Instruct` deployment you created. 1. Select **Open in playground**. Models deployed as a service can be consumed using either the chat or the comple 1. Make an API request based on the type of model you deployed. - For completions models, such as `Meta-Llama-3-8B`, use the [`/completions`](#completions-api) API.- - For chat models, such as `Meta-Llama-3-8B-Instruct`, use the [`/chat/completions`](#chat-api) API. + - For chat models, such as `Meta-Llama-3.1-405B-Instruct`, use the [`/chat/completions`](#chat-api) API. - For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section. + For more information on using the APIs, see the [reference](#reference-for-meta-llama-31-models-deployed-as-a-service) section. # [Meta Llama 2](#tab/llama-two) Models deployed as a service can be consumed using either the chat or the comple - For completions models, such as `Meta-Llama-2-7B`, use the [`/v1/completions`](#completions-api) API or the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/completions`. - For chat models, such as `Meta-Llama-2-7B-Chat`, use the [`/v1/chat/completions`](#chat-api) API or the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/chat/completions`. - For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section. + For more information on using the APIs, see the [reference](#reference-for-meta-llama-31-models-deployed-as-a-service) section. -### Reference for Meta Llama models deployed as a service +### Reference for Meta Llama 3.1 models deployed as a service Llama models accept both the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/chat/completions` or a [Llama Chat API](#chat-api) on `/v1/chat/completions`. In the same way, text completions can be generated using the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/completions` or a [Llama Completions API](#completions-api) on `/v1/completions` The following is an example response: ## Deploy Meta Llama models to managed compute -Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama models to managed compute in AI Studio. When deployed to managed compute, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to managed compute consume quota from your subscription. All the models in the Llama family can be deployed to managed compute. +Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama 3.1 models to managed compute in AI Studio. When deployed to managed compute, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. 
Models deployed to managed compute consume quota from your subscription. The following models from the 3.1 release wave are available on managed compute: +- `Meta-Llama-3.1-8B-Instruct` (FT supported) +- `Meta-Llama-3.1-70B-Instruct` (FT supported) +- `Meta-Llama-3.1-8B` (FT supported) +- `Meta-Llama-3.1-70B` (FT supported) +- `Llama Guard 3 8B` +- `Prompt Guard` -Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com). +Follow these steps to deploy a model such as `Meta-Llama-3.1-70B-Instruct` to managed compute in [Azure AI Studio](https://ai.azure.com). 1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models). Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time en 1. On the model's **Details** page, select **Deploy** next to the **View license** button. - :::image type="content" source="../media/deploy-monitor/llama/deploy-real-time-endpoint.png" alt-text="A screenshot showing how to deploy a model with the real-time endpoint option." lightbox="../media/deploy-monitor/llama/deploy-real-time-endpoint.png"::: + :::image type="content" source="../media/deploy-monitor/llama/deploy-real-time-endpoint.png" alt-text="A screenshot showing how to deploy a model with the managed compute option." lightbox="../media/deploy-monitor/llama/deploy-real-time-endpoint.png"::: 1. On the **Deploy with Azure AI Content Safety (preview)** page, select **Skip Azure AI Content Safety** so that you can continue to deploy the model using the UI. For reference about how to invoke Llama models deployed to managed compute, see ##### More inference examples -# [Meta Llama 3](#tab/llama-three) --| **Package** | **Sample Notebook** | -|-|-| -| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/openaisdk.ipynb) | -| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/langchain.ipynb) | -| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/webrequests.ipynb) | -| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/litellm.ipynb) | --# [Meta Llama 2](#tab/llama-two) - | **Package** | **Sample Notebook** | |-|-|-| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/openaisdk.ipynb) | -| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/langchain.ipynb) | -| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/webrequests.ipynb) | -| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/litellm.ipynb) | --+| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests)| +| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-openai)| +| LangChain | [langchain.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-langchain)| +| LiteLLM SDK | [litellm.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-litellm) | ## Cost and quotas -### Cost and quota considerations for Llama models deployed as a 
service +### Cost and quota considerations for Meta Llama 3.1 models deployed as a service -Llama models deployed as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or [fine-tuning the models](./fine-tune-model-llama.md). +Meta Llama 3.1 models deployed as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or [fine-tuning the models](./fine-tune-model-llama.md). Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently. For more information on how to track costs, see [monitor costs for models offere :::image type="content" source="../media/cost-management/marketplace/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="../media/cost-management/marketplace/costs-model-as-service-cost-details.png"::: -Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios. +Quota is managed per deployment. Each deployment has a rate limit of 400,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios. -### Cost and quota considerations for Llama models deployed as managed compute +### Cost and quota considerations for Meta Llama 3.1 models deployed as managed compute -For deployment and inferencing of Llama models with managed compute, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase. +For deployment and inferencing of Meta Llama 3.1 models with managed compute, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase. ## Content filtering Models deployed as a serverless API with pay-as-you-go are protected by Azure AI ## Next steps - [What is Azure AI Studio?](../what-is-ai-studio.md)-- [Fine-tune a Meta Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)+- [Fine-tune a Meta Llama 3.1 models in Azure AI Studio](fine-tune-model-llama.md) - [Azure AI FAQ article](../faq.yml) - [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
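As a concrete reference for the Llama Chat API route mentioned in this entry, a minimal request sketch; the endpoint host and key are placeholders, and your deployment's details page shows the exact URL and authentication scheme to use:

```python
import requests

url = "https://<your-endpoint>.<region>.models.ai.azure.com/v1/chat/completions"  # placeholder
headers = {"Authorization": "Bearer <endpoint-key>"}  # placeholder: serverless endpoint key

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize grouped query attention in one sentence."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}
response = requests.post(url, headers=headers, json=payload)
print(response.json()["choices"][0]["message"]["content"])
```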
ai-studio | Fine Tune Model Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-model-llama.md | description: Learn how to fine-tune Meta Llama models in Azure AI Studio. Previously updated : 5/21/2024 Last updated : 7/23/2024 reviewer: shubhirajMsft Fine-tuning provides significant value by enabling customization and optimizatio In this article, you learn how to fine-tune Meta Llama models in [Azure AI Studio](https://ai.azure.com). -The [Meta Llama family of large language models (LLMs)](./deploy-models-llama.md) is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with Reinforcement Learning from Human Feedback (RLHF), called Llama-2-chat. +The [Meta Llama family of large language models (LLMs)](./deploy-models-llama.md) is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with Reinforcement Learning from Human Feedback (RLHF), called Llama-Instruct. ## Models-# [Meta Llama 3](#tab/llama-three) +# [Meta Llama 3.1](#tab/llama-three) -Fine-tuning of Llama 3 models is currently not supported. +The following models are available in Azure Marketplace for Llama 3.1 when fine-tuning as a service with pay-as-you-go billing: ++- `Meta-Llama-3.1-70B-Instruct` (preview) +- `Meta-Llama-3.1-8B-Instruct` (preview) + +Fine-tuning of Llama 3.1 models is currently supported in projects located in West US 3. ++> [!IMPORTANT] +> At this time we are not able to do fine-tuning for Llama 3.1 with a sequence length of 128K. # [Meta Llama 2](#tab/llama-two) The following models are available in Azure Marketplace for Llama 2 when fine-tuning as a service with pay-as-you-go billing: -- `Llama-2-70b` (preview)-- `Llama-2-13b` (preview)-- `Llama-2-7b` (preview)+- `Meta Llama-2-70b` (preview) +- `Meta Llama-2-13b` (preview) +- `Meta Llama-2-7b` (preview) Fine-tuning of Llama 2 models is currently supported in projects located in West US 3. Fine-tuning of Llama 2 models is currently supported in projects located in West ## Prerequisites -# [Meta Llama 3](#tab/llama-three) +# [Meta Llama 3.1](#tab/llama-three) +++ An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin. +- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md). ++ > [!IMPORTANT] + > For Meta Llama 3.1 models, the pay-as-you-go model fine-tune offering is only available with AI hubs created in the **West US 3** region. ++- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. +- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. 
Alternatively, your account can be assigned a custom role that has the following permissions: ++ - On the Azure subscription: to subscribe the Azure AI project to the Azure Marketplace offering, once for each project, per offering: + - `Microsoft.MarketplaceOrdering/agreements/offers/plans/read` + - `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action` + - `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read` + - `Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read` + - `Microsoft.SaaS/register/action` + + - On the resource group: to create and use the SaaS resource: + - `Microsoft.SaaS/resources/read` + - `Microsoft.SaaS/resources/write` + + - On the Azure AI project: to deploy endpoints (the Azure AI Developer role contains these permissions already): + - `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*` + - `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*` ++ For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). -Fine-tuning of Llama 3 models is currently not supported. # [Meta Llama 2](#tab/llama-two) The supported file type is JSON Lines. Files are uploaded to the default datasto ## Fine-tune a Meta Llama model -# [Meta Llama 3](#tab/llama-three) +# [Meta Llama 3.1](#tab/llama-three) ++To fine-tune a Llama 3.1 model: ++1. Sign in to [Azure AI Studio](https://ai.azure.com). +1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models). ++1. On the model's **Details** page, select **fine-tune**. ++1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **West US 3** region. +1. On the fine-tune wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model. +1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Meta-Llama-3-70B) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**. ++ > [!NOTE] + > Subscribing a project to a particular Azure Marketplace offering (in this case, Meta-Llama-3-70B) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). ++1. Once you sign up the project for the particular Azure Marketplace offering, subsequent fine-tuning of the _same_ offering in the _same_ project doesn't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**. ++1. Enter a name for your fine-tuned model and the optional tags and description. +1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information. 
++ > [!NOTE] + > If you have your training/validation files in a credential-less datastore, you will need to allow workspace managed identity access to that datastore in order to proceed with MaaS fine-tuning with credential-less storage. On the "Datastore" page, select **Update authentication** and then select the following option: + + ![Use workspace managed identity for data preview and profiling in Azure Machine Learning Studio.](../media/how-to/fine-tune/llama/credentials.png) ++ Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses. + - The batch size to use for training. When set to -1, batch_size is calculated as 0.2% of the examples in the training set, and the max is 256. + - The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this multiplier. We recommend experimenting with values between 0.5 and 2. Empirically, we've found that larger learning rates often perform better with larger batch sizes. Must be between 0.0 and 5.0. + - Number of training epochs. An epoch refers to one full cycle through the data set. ++1. Task parameters are an optional step and an advanced option. Tuning hyperparameters is essential for optimizing large language models (LLMs) in real-world applications. It allows for improved performance and efficient resource usage. The default settings can be used, or advanced users can customize parameters like epochs or learning rate. ++1. Review your selections and proceed to train your model. ++Once your model is fine-tuned, you can deploy the model and use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy the Llama 3.1 family of large language models with Azure AI Studio](./deploy-models-llama.md). -Fine-tuning of Llama 3 models is currently not supported. # [Meta Llama 2](#tab/llama-two) |
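Because the wizard expects every training example to follow the chat format, it's worth validating the JSONL file before submitting a fine-tune job. A small hedged helper; the file name and the specific checks are illustrative, not part of the article:

```python
import json

def validate_jsonl(path: str) -> None:
    """Check that each training example parses and carries a chat-format 'messages' list."""
    with open(path) as f:
        for i, line in enumerate(f, start=1):
            example = json.loads(line)  # raises ValueError if the line isn't valid JSON
            messages = example.get("messages")
            assert isinstance(messages, list) and messages, f"line {i}: missing 'messages'"
            for message in messages:
                assert message.get("role") in {"system", "user", "assistant"}, f"line {i}: bad role"
                assert isinstance(message.get("content"), str), f"line {i}: content must be a string"

validate_jsonl("training_set.jsonl")  # placeholder: your training file
```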
ai-studio | Azure Open Ai Gpt 4V Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md | The prompt flow Azure OpenAI GPT-4 Turbo with Vision tool enables you to use you ## Prerequisites - An Azure subscription. <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">You can create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-- Currently, you must apply for access to this service. To apply for access to Azure OpenAI, complete the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - - An [AI Studio hub](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in [one of the regions that support GPT-4 Turbo with Vision](../../../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). When you deploy from your project's **Deployments** page, select `gpt-4` as the model name and `vision-preview` as the model version. ## Build with the Azure OpenAI GPT-4 Turbo with Vision tool |
ai-studio | Get Started Playground | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/get-started-playground.md | The steps in this quickstart include: ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - - You need an Azure AI Studio hub or permissions to create one. Your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the hub. For more information, see [hubs](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md). - If your role is **Contributor** or **Owner**, you can [create a hub in this tutorial](#create-a-project-in-azure-ai-studio). - If your role is **Azure AI Developer**, the hub must already be created. |
ai-studio | Hear Speak Playground | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md | The speech to text and text to speech features can be used together or separatel ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - - An [AI Studio hub](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - An [AI Studio project](../how-to/create-projects.md). |
ai-studio | Multimodal Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md | Extra usage fees might apply when using GPT-4 Turbo with Vision and Azure AI Vis ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - Once you have your Azure subscription, <a href="/azure/ai-services/openai/how-to/create-resource?pivots=web-portal" title="Create an Azure OpenAI resource." target="_blank">create an Azure OpenAI resource </a>. - An [AI Studio hub](../how-to/create-azure-ai-resource.md) with your Azure OpenAI resource added as a connection. |
ai-studio | Copilot Sdk Build Rag | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/copilot-sdk-build-rag.md | This system is able to interpret the intent of the query "how much does it cost?" If you navigate to the trace from this flow run, you see this in action. The local traces link appears in the console output before the result of the flow test run. ## Clean up resources |
ai-studio | Deploy Chat Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md | The steps in this tutorial are: ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - - An [AI Studio hub](../how-to/create-azure-ai-resource.md), [project](../how-to/create-projects.md), and [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already. - An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data. |
ai-studio | Deploy Copilot Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md | The steps in this tutorial are: ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - - An [AI Studio hub](../how-to/create-azure-ai-resource.md), [project](../how-to/create-projects.md), and [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already. - An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data. |
ai-studio | What Is Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md | You can [explore AI Studio (including the model catalog)](./how-to/model-catalog But for full functionality there are some requirements: - You need an [Azure account](https://azure.microsoft.com/free/). -- You also need to apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access). You receive a follow-up email when your subscription is added. ## Next steps |
aks | Aks Extension Attach Azure Container Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-attach-azure-container-registry.md | + + Title: Attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code +description: Learn how to attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code. ++ Last updated : 07/15/2024+++++# Attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code ++In this article, you learn how to attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code. ++## Prerequisites ++Before you begin, make sure you have the following resources: ++* An Azure container registry. If you don't have one, create one using the steps in [Quickstart: Create a private container registry][create-acr-cli]. +* An AKS cluster. If you don't have one, create one using the steps in [Quickstart: Deploy an AKS cluster][deploy-aks-cli]. +* The Azure Kubernetes Service (AKS) extension for Visual Studio Code installed. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode]. ++## Attach your Azure container registry to your AKS cluster ++You can access the screen for attaching your container registry to your AKS cluster using the command palette or the Kubernetes view. ++### [Command palette](#tab/command-palette) ++1. On your keyboard, press `Ctrl+Shift+P` to open the command palette. +2. Enter the following information: ++ * **Subscription**: Select the Azure subscription that holds your resources. + * **ACR Resource Group**: Select the resource group for your container registry. + * **Container Registry**: Select the container registry you want to attach to your cluster. + * **Cluster Resource Group**: Select the resource group for your cluster. + * **Cluster**: Select the cluster you want to attach your container registry to. ++3. Select **Attach**. ++ You should see a green checkmark, which means your container registry is attached to your AKS cluster. ++### [Kubernetes view](#tab/kubernetes-view) ++1. In the Kubernetes tab, under Clouds > Azure > your subscription > Automated Deployments, right-click your cluster and select **Attach ACR to Cluster**. +2. Enter the following information: ++ * **Subscription**: Select the Azure subscription that holds your resources. + * **ACR Resource Group**: Select the resource group for your container registry. + * **Container Registry**: Select the container registry you want to attach to your cluster. + * **Cluster Resource Group**: Select the resource group for your cluster. + * **Cluster**: Select the cluster you want to attach your container registry to. ++3. Select **Attach**. ++ You should see a green checkmark, which means your container registry is attached to your AKS cluster. ++++For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features]. ++## Product support and feedback ++If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github]. ++## Next steps ++To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons]. 
++<!-- LINKS --> +[create-acr-cli]: ../container-registry/container-registry-get-started-azure-cli.md +[deploy-aks-cli]: ./learn/quick-kubernetes-deploy-cli.md +[install-aks-vscode]: ./aks-extension-vs-code.md#installation +[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features +[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose +[aks-addons]: ./integrations.md + |
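Attaching a registry this way grants the cluster's kubelet identity pull access to the registry, which is the same operation the Azure CLI performs. A minimal sketch, assuming placeholder resource group, cluster, and registry names:

```azurecli
# Placeholder names; substitute your own resource group, cluster, and registry.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --attach-acr myContainerRegistry

# Optionally validate that the cluster can pull from the registry.
az aks check-acr \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --acr myContainerRegistry.azurecr.io
```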
aks | Aks Extension Draft Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-draft-deployment.md | + + Title: Create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code +description: Learn how to create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. ++ Last updated : 07/15/2024+++++# Create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code ++In this article, you learn how to create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. Automated Deployments provides an easy way to automate the process of scaling, updating, and maintaining your applications. ++## Prerequisites ++Before you begin, make sure you have the following resources: ++* An active folder with code open in Visual Studio Code. +* The Azure Kubernetes Service (AKS) extension for Visual Studio Code downloaded. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode]. ++## Create a Kubernetes deployment using the Azure Kubernetes Service (AKS) extension ++You can access the screen to create a Kubernetes deployment using the command palette or the explorer view. ++### [Command palette](#tab/command-palette) ++1. On your keyboard, press `Ctrl+Shift+P` to open the command palette. +2. In the search bar, search for and select **Automated Deployments: Create a Deployment**. +3. Enter the following information: ++ * **Subscription**: Select your Azure subscription. + * **Location**: Select a location where you want to save your Kubernetes deployment files. + * **Deployment options**: Select `Kubernetes manifests`, `Helm`, or `Kustomize`. + * **Target port**: Select the port your application listens on in your deployment. This port usually matches the port exposed in your Dockerfile. + * **Service port**: Select the port the service listens on for incoming traffic. + * **Namespace**: Select the namespace your application will be deployed into. ++4. Select **Create**. +++### [Explorer view](#tab/explorer-view) ++1. Right-click the explorer pane where your active folder is open and select **Create a Deployment**. +2. Enter the following information: ++ * **Subscription**: Select your Azure subscription. + * **Location**: Select a location where you want to save your Kubernetes deployment files. + * **Deployment options**: Select `Kubernetes manifests`, `Helm`, or `Kustomize`. + * **Target port**: Select the port your application listens on in your deployment. This port usually matches the port exposed in your Dockerfile. + * **Service port**: Select the port the service listens on for incoming traffic. + * **Namespace**: Select the namespace your application will be deployed into. ++3. Select **Create**. ++++For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features]. ++## Product support and feedback + +If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github]. + +## Next steps + +To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons]. 
++<!-- LINKS --> +[install-aks-vscode]: ./aks-extension-vs-code.md#installation +[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features +[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose +[aks-addons]: ./integrations.md + + |
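After the wizard generates your deployment files, you can apply them to the cluster yourself. A minimal sketch, assuming the wizard wrote Kubernetes manifests to a `manifests` folder and that `my-app` is the namespace and deployment name you chose (all placeholders, not values from the article):

```console
# Apply the generated manifests to the selected namespace.
kubectl apply -f ./manifests/ --namespace my-app

# Watch the rollout until the deployment reports success.
kubectl rollout status deployment/my-app --namespace my-app
```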
aks | Aks Extension Draft Dockerfile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-draft-dockerfile.md | + + Title: Create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code +description: Learn how to create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. ++ Last updated : 07/15/2024+++++# Create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code ++In this article, you learn how to create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. A Dockerfile is essential for Kubernetes because it defines the blueprint for creating Docker images. These images encapsulate your application along with its dependencies and environment settings, ensuring consistent deployment across various environments. ++## Prerequisites ++Before you begin, make sure you have the following resources: ++* An active folder with code open in Visual Studio Code. +* The Azure Kubernetes Service (AKS) extension for Visual Studio Code downloaded. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode]. ++## Create a Dockerfile using the Azure Kubernetes Service (AKS) extension ++You can access the screen to create a Dockerfile using the command palette or the explorer view. ++### [Command palette](#tab/command-palette) ++1. On your keyboard, press `Ctrl+Shift+P` to open the command palette. +2. In the search bar, search for and select **Automated Deployments: Create a Dockerfile**. +3. Enter the following information: ++ * **Location**: Select a location where you want to save your Dockerfile. + * **Programming language**: Select the programming language your app is written in. + * **Programming language version**: Select the programming language version. + * **Application Port**: Select the port your application listens on for incoming network connections. ++4. Select **Create**. ++### [Explorer view](#tab/explorer-view) ++1. Right-click the explorer pane where your active folder is open and select **Create a Dockerfile**. +2. Enter the following information: ++ * **Location**: Select a location where you want to save your Dockerfile. + * **Programming language**: Select the programming language your app is written in. + * **Programming language version**: Select the programming language version. + * **Application Port**: Select the port your application listens on for incoming network connections. ++3. Select **Create**. ++++For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features]. ++## Product support and feedback + +If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github]. + +## Next steps + +To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons]. + +<!-- LINKS --> +[install-aks-vscode]: ./aks-extension-vs-code.md#installation +[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features +[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose +[aks-addons]: ./integrations.md + |
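Before wiring the generated Dockerfile into a deployment, it's worth building and running it locally. A minimal sketch, assuming Docker is installed; the image name and port are placeholders, not values from the article:

```console
# Build an image from the generated Dockerfile in the current folder.
docker build -t my-app:local .

# Run the image locally, mapping the application port you selected (8080 is an example).
docker run --rm -p 8080:8080 my-app:local
```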
aks | Aks Extension Draft Github Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-draft-github-workflow.md | + + Title: Create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code +description: Learn how to create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. ++ Last updated : 07/15/2024+++++# Create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code ++In this article, you learn how to create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. A GitHub Workflow automates various development tasks, such as building, testing, and deploying code, ensuring consistency and efficiency across the development process. It enhances collaboration by integrating seamlessly with version control, enabling continuous integration and continuous deployment (CI/CD) pipelines, and ensuring that all changes are thoroughly vetted before being merged into the main codebase. ++## Prerequisites ++Before you begin, make sure you have the following resources: ++* An active folder with code open in Visual Studio Code. +* Make sure the current workspace is an active `git` repository. +* The Azure Kubernetes Service (AKS) extension for Visual Studio Code downloaded. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode]. ++## Create a GitHub Workflow using the Azure Kubernetes Service (AKS) extension ++You can access the screen to create a GitHub Workflow using the command palette or the Kubernetes view. ++### [Command palette](#tab/command-palette) ++1. On your keyboard, press `Ctrl+Shift+P` to open the command palette. +2. Enter the following information: ++ * **Workflow name**: Enter a name for your GitHub Workflow. + * **GitHub repository**: Select the location where you want to save your Kubernetes deployment files. + * **Subscription**: Select your Azure subscription. + * **Dockerfile**: Select the Dockerfile that you want to build in the GitHub Action. + * **Build context**: Select a build context. + * **ACR Resource Group**: Select an ACR resource group. + * **Container Registry**: Select a container registry. + * **Azure Container Registry image**: Select or enter an Azure Container Registry image. + * **Cluster Resource Group**: Select your cluster resource group. + * **Cluster**: Select your AKS cluster. + * **Namespace**: Select or enter a namespace to deploy into. + * **Type**: Select the type of deployment option. ++3. Select **Create**. ++### [Kubernetes view](#tab/kubernetes-view) + +1. In the Kubernetes tab, under Clouds > Azure > your subscription > Automated Deployments, right-click your cluster and select **Create a GitHub Workflow**. +2. Enter the following information: ++ * **Workflow name**: Enter a name for your GitHub Workflow. + * **GitHub repository**: Select the location where you want to save your Kubernetes deployment files. + * **Subscription**: Select your Azure subscription. + * **Dockerfile**: Select the Dockerfile that you want to build in the GitHub Action. + * **Build context**: Select a build context. + * **ACR Resource Group**: Select an ACR resource group. + * **Container Registry**: Select a container registry. 
+ * **Azure Container Registry image**: Select or enter an Azure Container Registry image. + * **Cluster Resource Group**: Select your cluster resource group. + * **Cluster**: Select your AKS cluster. + * **Namespace**: Select or enter a namespace to deploy into. + * **Type**: Select the type of deployment option. ++3. Select **Create**. ++++For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features]. ++## Product support and feedback + +If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github]. + +## Next steps + +To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons]. + +<!-- LINKS --> +[install-aks-vscode]: ./aks-extension-vs-code.md#installation +[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features +[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose +[aks-addons]: ./integrations.md + |
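The generated workflow file lands under `.github/workflows/` in your repository, and committing and pushing it is what activates it in GitHub Actions. A minimal sketch, assuming `main` is your default branch:

```console
# Commit the generated workflow and push it to trigger GitHub Actions.
git add .github/workflows/
git commit -m "Add AKS Automated Deployments workflow"
git push origin main
```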
aks | Tutorial Kubernetes Deploy Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md | Title: Kubernetes on Azure tutorial - Deploy an application to Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using images stored in Azure Container Registry. Previously updated : 02/20/2023 Last updated : 06/10/2024 #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications. In these tutorials, your Azure Container Registry (ACR) instance stores the cont az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table ``` -2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`. -- ```azurecli-interactive - vi aks-store-quickstart.yaml - ``` +2. Make sure you're in the cloned *aks-store-demo* directory, and then open the `aks-store-quickstart.yaml` manifest file with a text editor. 3. Update the `image` property for the containers by replacing *ghcr.io/azure-samples* with your ACR login server name. In these tutorials, your Azure Container Registry (ACR) instance stores the cont ... ``` -4. Save and close the file. In `vi`, use `:wq`. +4. Save and close the file. ### [Azure PowerShell](#tab/azure-powershell) 1. Get your login server address using the [`Get-AzContainerRegistry`][get-azcontainerregistry] cmdlet and query for your login server. Make sure the `$ACRNAME` environment variable is set to the name of your ACR instance. ```azurepowershell-interactive- (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer + (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name $ACRNAME).LoginServer ``` -2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`. -- ```azurepowershell-interactive - vi aks-store-quickstart.yaml - ``` +2. Make sure you're in the cloned *aks-store-demo* directory, and then open the `aks-store-quickstart.yaml` manifest file with a text editor. 3. Update the `image` property for the containers by replacing *ghcr.io/azure-samples* with your ACR login server name. In these tutorials, your Azure Container Registry (ACR) instance stores the cont ... ``` -4. Save and close the file. In `vi`, use `:wq`. +4. Save and close the file. 
### [Azure Developer CLI](#tab/azure-azd) In these tutorials, your Azure Container Registry (ACR) instance stores the cont The following example output shows the resources successfully created in the AKS cluster: ```output- deployment.apps/rabbitmq created + statefulset.apps/rabbitmq created + configmap/rabbitmq-enabled-plugins created service/rabbitmq created deployment.apps/order-service created service/order-service created In these tutorials, your Azure Container Registry (ACR) instance stores the cont The following example output shows the resources successfully created in the AKS cluster: ```output- deployment.apps/rabbitmq created + statefulset.apps/rabbitmq created + configmap/rabbitmq-enabled-plugins created service/rabbitmq created deployment.apps/order-service created service/order-service created When the application runs, a Kubernetes service exposes the application front en kubectl get service store-front --watch ``` - Initially, the `EXTERNAL-IP` for the *store-front* service shows as *pending*: + Initially, the `EXTERNAL-IP` for the `store-front` service shows as `<pending>`: ```output store-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s ``` -2. When the `EXTERNAL-IP` address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. +2. When the `EXTERNAL-IP` address changes from `<pending>` to a public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service: When the application runs, a Kubernetes service exposes the application front en store-front LoadBalancer 10.0.34.242 52.179.23.131 80:30676/TCP 67s ``` -3. View the application in action by opening a web browser to the external IP address of your service. +3. View the application in action by opening a web browser and navigating to the external IP address of your service: `http://<external-ip>`. :::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png"::: Navigate to your Azure portal to find your deployment information. 1. Open your [Resource Group][azure-rg] on the Azure portal 1. Navigate to the Kubernetes service for your cluster 1. Select `Services and Ingress` under `Kubernetes Resources`-1. Copy the External IP shown in the column for store-front +1. Copy the External IP shown in the column for the `store-front` service 1. Paste the IP into your browser and visit your store page :::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png"::: +## Clean up resources ++Since you validated the application's functionality, you can now remove the application from the cluster. We will deploy the application again in the next tutorial. ++1. Remove the application resources using the `kubectl delete` command. ++ ```console + kubectl delete -f aks-store-quickstart.yaml + ``` ++1. Check that all the application pods have been removed: ++ ```console + kubectl get pods + ``` + ## Next steps In this tutorial, you deployed a sample Azure application to a Kubernetes cluster in AKS. You learned how to: |
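If you prefer to script the image substitution instead of editing the manifest by hand, a shell one-liner makes the same change. A sketch, assuming a bash shell with GNU sed and the *myResourceGroup* resource group from the tutorial:

```azurecli
# Look up the ACR login server, then swap it into the manifest in place.
ACR_LOGIN_SERVER=$(az acr list --resource-group myResourceGroup --query "[0].loginServer" --output tsv)
sed -i "s|ghcr.io/azure-samples|${ACR_LOGIN_SERVER}|g" aks-store-quickstart.yaml
```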
aks | Tutorial Kubernetes Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md | Title: Kubernetes on Azure tutorial - Create an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to create an AKS cluster and use kubectl to connect to the Kubernetes main node. Previously updated : 02/14/2024 Last updated : 06/10/2024 Kubernetes provides a distributed platform for containerized applications. With In this tutorial, part three of seven, you deploy a Kubernetes cluster in AKS. You learn how to: > [!div class="checklist"]-+> > * Deploy an AKS cluster that can authenticate to an Azure Container Registry (ACR). > * Install the Kubernetes CLI, `kubectl`. > * Configure `kubectl` to connect to your AKS cluster. For information about AKS resource limits and region availability, see [Quotas, To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription. -* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. +* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. We will continue to use the environment variable, `$ACRNAME`, that we set in the [previous tutorial][aks-tutorial-prepare-acr]. If you do not have this environment variable set, set it now to the same value you used previously. ```azurecli-interactive az aks create \ To allow an AKS cluster to interact with other Azure resources, the Azure platfo --name myAKSCluster \ --node-count 2 \ --generate-ssh-keys \- --attach-acr <acrName> + --attach-acr $ACRNAME ``` > [!NOTE] To allow an AKS cluster to interact with other Azure resources, the Azure platfo * Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. 
```azurepowershell-interactive- New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName> + New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach $ACRNAME ``` > [!NOTE] To avoid needing an **Owner** or **Azure account administrator** role, you can a ```output NAME STATUS ROLES AGE VERSION- aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6 - aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6 + aks-nodepool1-19366578-vmss000000 Ready agent 47h v1.28.9 + aks-nodepool1-19366578-vmss000001 Ready agent 47h v1.28.9 ``` ### [Azure PowerShell](#tab/azure-powershell) To avoid needing an **Owner** or **Azure account administrator** role, you can a ```output NAME STATUS ROLES AGE VERSION- aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6 - aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6 + aks-nodepool1-19366578-vmss000000 Ready agent 47h v1.28.9 + aks-nodepool1-19366578-vmss000001 Ready agent 47h v1.28.9 ``` ### [Azure Developer CLI](#tab/azure-azd) To avoid needing an **Owner** or **Azure account administrator** role, you can a ```output NAME STATUS ROLES AGE VERSION- aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6 - aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6 + aks-nodepool1-19366578-vmss000000 Ready agent 47h v1.28.9 + aks-nodepool1-19366578-vmss000001 Ready agent 47h v1.28.9 ``` [!INCLUDE [azd-login-ts](./includes/azd/azd-login-ts.md)] |
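Connecting `kubectl` to the new cluster follows the standard credential flow; a minimal sketch using the names from this tutorial:

```azurecli
# Merge the cluster's credentials into your local kubeconfig.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Verify that both nodes report a Ready status.
kubectl get nodes
```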
aks | Tutorial Kubernetes Paas Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-paas-services.md | Title: Kubernetes on Azure tutorial - Use PaaS services with an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to use the Azure Service Bus service with your AKS cluster. Previously updated : 10/23/2023 Last updated : 06/10/2024 #Customer intent: As a developer, I want to learn how to use PaaS services with an Azure Kubernetes Service (AKS) cluster so that I can deploy and manage my applications. In previous tutorials, you used a RabbitMQ container to store orders submitted b kubectl get service store-front ``` -2. Navigate to the external IP address of the `store-front` service in your browser. +2. Navigate to the external IP address of the `store-front` service in your browser using `http://<external-ip>`. 3. Place an order by choosing a product and selecting **Add to cart**. 4. Select **Cart** to view your order, and then select **Checkout**. |
aks | Tutorial Kubernetes Prepare Acr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md | Title: Kubernetes on Azure tutorial - Create an Azure Container Registry and build images description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload sample application container images. Previously updated : 11/28/2023 Last updated : 06/10/2024 Before creating an ACR instance, you need a resource group. An Azure resource gr 2. Create an ACR instance using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses an environment variable, `$ACRNAME`, as a placeholder for the container registry name. You can set this environment variable to your unique ACR name to use in future commands. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. ```azurepowershell-interactive+ $rand=New-Object System.Random + $RAND=$rand.Next() + $ACRNAME="myregistry$RAND" # Or replace with your own name New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name $ACRNAME -Location eastus -Sku Basic ``` |
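The Azure CLI path mirrors the PowerShell snippet above; a minimal sketch that uses bash's built-in `$RANDOM` to make the registry name unique:

```azurecli
# Generate a unique registry name and create a Basic-tier registry.
ACRNAME="myregistry$RANDOM"   # Or replace with your own globally unique name.
az acr create \
    --resource-group myResourceGroup \
    --name $ACRNAME \
    --location eastus \
    --sku Basic
```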
aks | Tutorial Kubernetes Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md | Title: Kubernetes on Azure tutorial - Prepare an application for Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS. Previously updated : 02/15/2023 Last updated : 06/10/2024 In the next tutorial, you learn how to create a cluster using the `azd` template <!-- LINKS - external --> [docker-compose]: https://docs.docker.com/compose/-[docker-for-linux]: https://docs.docker.com/engine/installation/#supported-platforms -[docker-for-mac]: https://docs.docker.com/docker-for-mac/ -[docker-for-windows]: https://docs.docker.com/docker-for-windows/ +[docker-for-linux]: https://docs.docker.com/desktop/install/linux-install/ +[docker-for-mac]: https://docs.docker.com/desktop/install/mac-install/ +[docker-for-windows]: https://docs.docker.com/desktop/install/windows-install/ [docker-get-started]: https://docs.docker.com/get-started/ [docker-images]: https://docs.docker.com/engine/reference/commandline/images/ [docker-ps]: https://docs.docker.com/engine/reference/commandline/ps/ |
aks | Tutorial Kubernetes Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md | Title: Kubernetes on Azure tutorial - Scale applications in Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to scale nodes and pods and implement horizontal pod autoscaling. Previously updated : 03/05/2023 Last updated : 06/10/2024 The following example increases the number of nodes to three in the Kubernetes c Once the cluster successfully scales, your output will be similar to following example output: ```output+ "aadProfile": null, + "addonProfiles": null, "agentPoolProfiles": [ {+ ... "count": 3,- "dnsPrefix": null, - "fqdn": null, - "name": "myAKSCluster", - "osDiskSizeGb": null, + "mode": "System", + "name": "nodepool1", + "osDiskSizeGb": 128, + "osDiskType": "Managed", "osType": "Linux", "ports": null,- "vmSize": "Standard_D2_v2", + "vmSize": "Standard_DS2_v2", "vnetSubnetId": null+ ... }+ ... + ] ``` ### [Azure PowerShell](#tab/azure-powershell) The following example increases the number of nodes to three in the Kubernetes c Once the cluster successfully scales, your output will be similar to following example output: ```output- ProvisioningState : Succeeded - MaxAgentPools : 100 - KubernetesVersion : 1.19.9 - DnsPrefix : myAKSCluster - Fqdn : myakscluster-000a0aa0.hcp.eastus.azmk8s.io - PrivateFQDN : - AgentPoolProfiles : {default} - WindowsProfile : Microsoft.Azure.Commands.Aks.Models.PSManagedClusterWindowsProfile - AddonProfiles : {} - NodeResourceGroup : MC_myresourcegroup_myAKSCluster_eastus - EnableRBAC : True - EnablePodSecurityPolicy : - NetworkProfile : Microsoft.Azure.Commands.Aks.Models.PSContainerServiceNetworkProfile - AadProfile : - ApiServerAccessProfile : - Identity : - LinuxProfile : Microsoft.Azure.Commands.Aks.Models.PSContainerServiceLinuxProfile - ServicePrincipalProfile : Microsoft.Azure.Commands.Aks.Models.PSContainerServiceServicePrincipalProfile - Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myresourcegroup/providers/Micros - oft.ContainerService/managedClusters/myAKSCluster - Name : myAKSCluster - Type : Microsoft.ContainerService/ManagedClusters - Location : eastus - Tags : {} + ... + ProvisioningState : Succeeded + MaxAgentPools : 100 + KubernetesVersion : 1.28 + CurrentKubernetesVersion : 1.28.9 + DnsPrefix : myAKSCluster + Fqdn : myakscluster-000a0aa0.hcp.eastus.azmk8s.io + PrivateFQDN : + AzurePortalFQDN : myakscluster-000a0aa0.portal.hcp.eastus.azmk8s.io + AgentPoolProfiles : {default} + ... + ResourceGroupName : myResourceGroup + Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Mic + rosoft.ContainerService/managedClusters/myAKSCluster + Name : myAKSCluster + Type : Microsoft.ContainerService/ManagedClusters + Location : eastus + Tags : ``` |
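The node-count change reflected in the output above comes from a scale operation; a minimal sketch with the Azure CLI:

```azurecli
# Scale the default node pool to three nodes.
az aks scale \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3
```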
aks | Tutorial Kubernetes Upgrade Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md | Title: Kubernetes on Azure tutorial - Upgrade an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 11/02/2023 Last updated : 06/10/2024 If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0 az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ``` - The following example output shows the current version as *1.26.6* and lists the available versions under `upgrades`: + The following example output shows the current version as *1.28.9* and lists the available versions under `upgrades`: ```output- { - "agentPoolProfiles": null, - "controlPlaneProfile": { - "kubernetesVersion": "1.26.6", + { + "agentPoolProfiles": null, + "controlPlaneProfile": { + "kubernetesVersion": "1.28.9", + ... + "upgrades": [ + { + "isPreview": null, + "kubernetesVersion": "1.29.4" + }, + { + "isPreview": null, + "kubernetesVersion": "1.29.2" + } + ] + }, ...- "upgrades": [ - { - "isPreview": null, - "kubernetesVersion": "1.27.1" - }, - { - "isPreview": null, - "kubernetesVersion": "1.27.3" - } - ] - }, - ... - } + } ``` ### [Azure PowerShell](#tab/azure-powershell) If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0 ```azurepowershell-interactive Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |- Select-Object -Property Name, KubernetesVersion, Location + Select-Object -Property Name, CurrentKubernetesVersion, Location ``` - The following example output shows the current version as *1.26.6* and the location as *eastus*: + The following example output shows the current version as *1.28.9* and the location as *eastus*: ```output- Name KubernetesVersion Location - - -- -- - myAKSCluster 1.26.6 eastus + Name CurrentKubernetesVersion Location + - -- + myAKSCluster 1.28.9 eastus ``` 2. Check which Kubernetes upgrade releases are available in the region where your cluster resides using the [`Get-AzAksVersion`][get-azaksversion] cmdlet. If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0 ```output Default IsPreview OrchestratorType OrchestratorVersion - - -- Kubernetes 1.27.1 - Kubernetes 1.27.3 + Kubernetes 1.29.4 + Kubernetes 1.29.2 + True Kubernetes 1.28.9 + Kubernetes 1.28.5 + ... ``` ### [Azure portal](#tab/azure-portal) You can either [manually upgrade your cluster](#manually-upgrade-cluster) or [co --kubernetes-version KUBERNETES_VERSION ``` +* You will be prompted to confirm the upgrade operation, and to confirm that you want to upgrade the control plane *and* all the node pools to the selected version of Kubernetes: ++ ```console + Are you sure you want to perform this operation? (y/N): y + Since control-plane-only argument is not specified, this will upgrade the control plane AND all nodepools to version 1.29.2. Continue? (y/N): y + ``` + > [!NOTE] > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, you must first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*. - The following example output shows the result of upgrading to *1.27.3*. 
Notice the `kubernetesVersion` now shows *1.27.3*: + The following example output shows the result of upgrading to *1.29.2*. Notice the `kubernetesVersion` now shows *1.29.2*: ```output {+ ... "agentPoolProfiles": [ {+ ... "count": 3,+ "currentOrchestratorVersion": "1.29.2", "maxPods": 110, "name": "nodepool1",+ "nodeImageVersion": "AKSUbuntu-2204gen2containerd-202405.27.0", + "orchestratorVersion": "1.29.2", "osType": "Linux",- "vmSize": "Standard_DS1_v2", + "upgradeSettings": { + "drainTimeoutInMinutes": null, + "maxSurge": "10%", + "nodeSoakDurationInMinutes": null, + "undrainableNodeBehavior": null + }, + "vmSize": "Standard_DS2_v2", + ... } ],+ ... + "currentKubernetesVersion": "1.29.2", "dnsPrefix": "myAKSClust-myResourceGroup-19da35", "enableRbac": false, "fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io", "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",- "kubernetesVersion": "1.27.3", + "kubernetesVersion": "1.29.2", "location": "eastus", "name": "myAKSCluster", "type": "Microsoft.ContainerService/ManagedClusters"+ ... } ``` You can either [manually upgrade your cluster](#manually-upgrade-cluster) or [co > [!NOTE] > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*. - The following example output shows the result of upgrading to *1.27.3*. Notice the `KubernetesVersion` now shows *1.27.3*: + The following example output shows the result of upgrading to *1.29.2*. Notice the `KubernetesVersion` now shows *1.29.2*: ```output- ProvisioningState : Succeeded - MaxAgentPools : 100 - KubernetesVersion : 1.27.3 - PrivateFQDN : - AgentPoolProfiles : {default} - Name : myAKSCluster - Type : Microsoft.ContainerService/ManagedClusters - Location : eastus - Tags : {} + ... + ProvisioningState : Succeeded + MaxAgentPools : 100 + KubernetesVersion : 1.29.2 + CurrentKubernetesVersion : 1.29.2 + ... + ResourceGroupName : myResourceGroup + Name : myAKSCluster + Type : Microsoft.ContainerService/ManagedClusters + Location : eastus + Tags : ``` #### [Azure portal](#tab/azure-portal) AKS regularly provides new node images. Linux node images are updated weekly, an The following example output shows some of the above events listed during an upgrade: ```output+ LAST SEEN TYPE REASON OBJECT MESSAGE ...- default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001] - ... - default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING) + 5m Normal Drain node/aks-nodepool1-96663640-vmss000000 Draining node: aks-nodepool1-96663640-vmss000000 + 5m Normal Upgrade node/aks-nodepool1-96663640-vmss000000 Deleting node aks-nodepool1-96663640-vmss000000 from API server + 4m Normal Upgrade node/aks-nodepool1-96663640-vmss000000 Successfully reimaged node: aks-nodepool1-96663640-vmss000000 + 4m Normal Upgrade node/aks-nodepool1-96663640-vmss000000 Successfully upgraded node: aks-nodepool1-96663640-vmss000000 + 4m Normal Drain node/aks-nodepool1-96663640-vmss000000 Draining node: aks-nodepool1-96663640-vmss000000 ... ``` AKS regularly provides new node images. 
Linux node images are updated weekly, an ```output Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn - - - -- myAKSCluster eastus myResourceGroup 1.27.3 1.27.3 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io + myAKSCluster eastus myResourceGroup 1.29.2 1.29.2 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io ``` ### [Azure PowerShell](#tab/azure-powershell) AKS regularly provides new node images. Linux node images are updated weekly, an The following example output shows the AKS cluster runs *KubernetesVersion 1.29.2*: ```output- Name Location KubernetesVersion ProvisioningState - - -- -- -- - myAKSCluster eastus 1.27.3 Succeeded + Name Location KubernetesVersion ProvisioningState + - -- -- -- + myAKSCluster eastus 1.29.2 Succeeded ``` ### [Azure portal](#tab/azure-portal) |
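Putting the version check and the upgrade together, the flow looks like the following sketch; the target version must be one of the versions returned by the availability check:

```azurecli
# List the versions your cluster can upgrade to.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the control plane and all node pools to the chosen version.
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.29.2
```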
app-service | Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md | description: Learn how to migrate your App Service Environment v2 to App Service Previously updated : 7/11/2024 Last updated : 7/23/2024 # Migration to App Service Environment v3 using the side-by-side migration feature Once you're ready to redirect traffic, you can complete the final step of the mi > > [!NOTE]-> You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support. +> It's important to complete this step as soon as possible. When your App Service Environment is in the hybrid state, it's unable to receive platform upgrades and security patches, which makes it more vulnerable to instability and security threats. > If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, contact support. az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties This step is your opportunity to test and validate your new App Service Environment v3. -Once you confirm your apps are working as expected, you can finalize the migration by running the following command. This command also deletes your old environment. You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support. +Once you confirm your apps are working as expected, you can finalize the migration by running the following command. This command also deletes your old environment. If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to discuss your options. Don't run the DNS change command since that command completes the migration. |
automation | Manage Runtime Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runtime-environment.md | Title: Manage Runtime environment and associated runbooks in Azure Automation description: This article tells how to manage runbooks in Runtime environment and associated runbooks in Azure Automation Previously updated : 06/28/2024 Last updated : 07/24/2024 An Azure Automation account in supported public region (except Central India, Ge > [!NOTE] > - When you import a package, it might take several minutes. 100 MB is the maximum total size of the files that you can import. > - Use *.zip* files for PowerShell runbook types as mentioned [here](/powershell/scripting/developer/module/understanding-a-windows-powershell-module)- > - For Python 3.8 packages, use .tar.gz or .whl files targeting cp38-amd64. + > - For Python 3.8 packages, use .whl files targeting cp38-amd64. > - For Python 3.10 (preview) packages, use .whl files targeting cp310 Linux OS. 1. Select **Next** and in the **Review + Create** tab, verify that the settings are correct. When you select **Create**, Azure runs validation on the Runtime environment settings that you chose. If the validation passes, you can proceed to create the Runtime environment; otherwise, the portal indicates the settings that you need to modify. |
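To produce a wheel that matches the cp38-amd64 requirement, `pip download` can fetch a prebuilt binary package for a specific interpreter and platform. A sketch, using `requests` as an example package (not from the article) and assuming the 3.8 sandbox is 64-bit Windows, as cp38-amd64 suggests:

```console
# Download a 64-bit Windows wheel built for Python 3.8, without building from source.
pip download requests --only-binary=:all: --python-version 3.8 --platform win_amd64 --dest ./packages
```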
automation | Python Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-packages.md | Title: Manage Python 2 packages in Azure Automation description: This article tells how to manage Python 2 packages in Azure Automation. Previously updated : 04/23/2024 Last updated : 07/23/2024 For information on managing Python 3 packages, see [Manage Python 3 packages](./ :::image type="content" source="media/python-packages/add-python-package.png" alt-text="Screenshot of the Python packages page shows Python packages in the left menu and Add a Python package highlighted."::: -2. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** or **.tar.gz** file. +2. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** file. 3. Enter the name and select the **Runtime version** as 2.x.x 4. Select **Import**. |
azure-cache-for-redis | Cache Azure Active Directory For Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md | To use the ACL integration, your client application must assume the identity of For information on using Microsoft Entra ID with Azure CLI, see the [references pages for identity](/cli/azure/redis/identity). +## Disable access key authentication on your cache ++Using Microsoft Entra ID is the secure way to connect your cache. We recommend using Microsoft Entra ID and disabling access keys. ++When you disable access key authentication for a cache, all existing client connections are terminated, whether they use access keys or Microsoft Entra ID authentication. Follow the recommended Redis client best practices to implement proper retry mechanisms for reconnecting any Microsoft Entra ID-based connections. ++Before you disable access keys: ++- Microsoft Entra ID authorization must be enabled. +- Disabling access keys is only available for Basic, Standard, and Premium tier caches. +- For geo-replicated caches, before you disable access keys, you must: 1) unlink the caches, 2) disable access keys, and finally, 3) relink the caches. ++If you have a cache where access keys are used, and you want to disable access keys, follow this procedure. ++1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to disable access keys. ++1. Select **Authentication** from the Resource menu. ++1. In the working pane, select **Access keys**. ++1. Select **Disable Access Keys Authentication**. Then, select **Save**. ++ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-disable-access-keys.png" alt-text="Screenshot showing access keys in the working pane with a red box around Disable Access Key Authentication. "::: ++1. You're asked to confirm that you want to update your configuration. Select **Yes**. ++> [!IMPORTANT] +> When the **Disable Access Key Authentication** setting is changed for a cache, all existing client connections, using access keys or Microsoft Entra ID, are terminated. Follow the best practices to implement proper retry mechanisms for reconnecting Microsoft Entra ID-based connections. For more information, see [Connection resilience](cache-best-practices-connection.md). + ## Using data access configuration with your cache If you would like to use a custom access policy instead of Redis Data Owner, go to the **Data Access Configuration** on the Resource menu. For more information, see [Configure a custom data access policy for your application](cache-configure-role-based-access-control.md#configure-a-custom-data-access-policy-for-your-application). |
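For the geo-replication case, the unlink and relink steps can be scripted with the Azure CLI. A sketch, assuming placeholder cache names:

```azurecli
# 1. Unlink the secondary cache from the primary.
az redis server-link delete \
    --resource-group myResourceGroup \
    --name myPrimaryCache \
    --linked-server-name mySecondaryCache

# 2. Disable access keys on both caches by using the portal steps above.

# 3. Relink the secondary after both caches are updated.
az redis server-link create \
    --resource-group myResourceGroup \
    --name myPrimaryCache \
    --server-to-link mySecondaryCache \
    --replication-role Secondary
```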
azure-cache-for-redis | Monitor Cache Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/monitor-cache-reference.md | description: This article contains important reference material you need when yo Last updated 05/13/2024 -+ The following list provides details and more information about the supported Azu - Sets - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`. - Total Keys - - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys return the maximum number of keys of the shard that had the maximum number of keys during the reporting interval. + - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. + + > [!IMPORTANT] + > Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys returns the maximum number of keys of the shard that had the maximum number of keys during the reporting interval. + - Total Operations - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub, there are no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there are `Total Operations` metrics that reflect the cache usage for pub/sub operations. - Used Memory |
azure-monitor | Itsmc Connections Servicenow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md | For information about installing ITSMC, see [Add the IT Service Management Conne ### OAuth setup -ServiceNow supported versions include Vancouver, Utah, Tokyo, San Diego, Rome, Quebec, Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva. +ServiceNow supported versions include Washington, Vancouver, Utah, Tokyo, San Diego, Rome, Quebec, Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva. ServiceNow admins must generate a client ID and client secret for their ServiceNow instance. See the following information as required: |
azure-monitor | Azure Web Apps Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md | Title: Monitor Azure app services performance Java | Microsoft Docs description: Application performance monitoring for Azure app services using Java. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 06/23/2023 Last updated : 08/22/2024 ms.devlang: java |
azure-monitor | Kubernetes Monitoring Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md | This article provides onboarding guidance for the following types of clusters. A > [!NOTE] > The Managed Prometheus Arc-Enabled Kubernetes extension does not support the following configurations:-> * Red Hat Openshift distributions - > * Windows nodes +> * Red Hat OpenShift distributions, including Azure Red Hat OpenShift (ARO) +> * Windows nodes ## Workspaces |
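For a supported AKS cluster, as opposed to the Arc-enabled scenarios noted above, Managed Prometheus can be enabled with a single CLI call; a minimal sketch with placeholder names:

```azurecli
# Enable Managed Prometheus metrics collection on an existing AKS cluster.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-azure-monitor-metrics
```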
azure-monitor | Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md | A subset of the availability zones that support data resilience currently also s | East US | | :white_check_mark: | | | East US 2 | | :white_check_mark: | :white_check_mark: | | South Central US | :white_check_mark: | :white_check_mark: | |+| Spain Central | :white_check_mark: | :white_check_mark: | :white_check_mark: | | West US 2 | | :white_check_mark: | :white_check_mark: | | West US 3 | :white_check_mark: | :white_check_mark: | | | **Asia Pacific** | | | | A subset of the availability zones that support data resilience currently also s Learn more about how to: - [Set up a dedicated cluster](logs-dedicated-clusters.md).-- [Migrate Log Analytics workspaces to availability zone support](../../availability-zones/migrate-monitor-log-analytics.md).+- [Migrate Log Analytics workspaces to availability zone support](../../availability-zones/migrate-monitor-log-analytics.md). |
azure-netapp-files | Azure Government | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md | All [Azure NetApp Files features](whats-new.md) available on Azure public cloud | Azure NetApp Files features | Azure public cloud availability | Azure Government availability | |: |: |: |-| Azure NetApp Files customer-managed keys | Generally available (GA) | No | | Azure NetApp Files large volumes | Generally available (GA) | Generally available [(select regions)](large-volumes-requirements-considerations.md#supported-regions) | ## Portal access |
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | Azure NetApp Files customer-managed keys is supported for the following regions: * Italy North * Japan East * Japan West- * Korea Central * Korea South * North Central US Azure NetApp Files customer-managed keys is supported for the following regions: * UAE North * UK South * UK West+* US Gov Arizona +* US Gov Texas +* US Gov Virginia * West Europe * West US * West US 2 |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## July 2024 +* [Customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md#supported-regions) is now available in all US Gov regions + * [Azure NetApp Files large volume enhancement:](large-volumes-requirements-considerations.md) increased throughput and maximum size limit of 2-PiB volume (preview) Azure NetApp Files large volumes now support increased maximum throughput and size limits. This update brings an increased size limit to **one PiB,** available via Azure Feature Exposure Control (AFEC), allowing for more extensive and robust data management solutions for various workloads, including HPC, EDA, VDI, and more. |
azure-resource-manager | Bicep Core Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md | + + Title: Bicep warnings and error codes +description: Lists the warnings and error codes. ++ Last updated : 07/23/2024+++# Bicep warning and error codes ++If you need more information about a particular warning or error code, select the **Feedback** button in the upper right corner of the page and specify the code. ++| Code | Description | +||-| +| BCP001 | The following token is not recognized: "{token}". | +| BCP002 | The multi-line comment at this location is not terminated. Terminate it with the */ character sequence. | +| BCP003 | The string at this location is not terminated. Terminate the string with a single quote character. | +| BCP004 | The string at this location is not terminated due to an unexpected new line character. | +| BCP005 | The string at this location is not terminated. Complete the escape sequence and terminate the string with a single unescaped quote character. | +| BCP006 | The specified escape sequence is not recognized. Only the following escape sequences are allowed: {ToQuotedString(escapeSequences)}. | +| BCP007 | This declaration type is not recognized. Specify a metadata, parameter, variable, resource, or output declaration. | +| BCP008 | Expected the "=" token, or a newline at this location. | +| BCP009 | Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location. | +| BCP010 | Expected a valid 64-bit signed integer. | +| BCP011 | The type of the specified value is incorrect. Specify a string, boolean, or integer literal. | +| BCP012 | Expected the "{keyword}" keyword at this location. | +| BCP013 | Expected a parameter identifier at this location. | +| BCP015 | Expected a variable identifier at this location. | +| BCP016 | Expected an output identifier at this location. | +| BCP017 | Expected a resource identifier at this location. | +| BCP018 | Expected the "{character}" character at this location. | +| BCP019 | Expected a new line character at this location. | +| BCP020 | Expected a function or property name at this location. | +| BCP021 | Expected a numeric literal at this location. | +| BCP022 | Expected a property name at this location. | +| BCP023 | Expected a variable or function name at this location. | +| BCP024 | The identifier exceeds the limit of {LanguageConstants.MaxIdentifierLength}. Reduce the length of the identifier. | +| BCP025 | The property "{property}" is declared multiple times in this object. Remove or rename the duplicate properties. | +| BCP026 | The output expects a value of type "{expectedType}" but the provided value is of type "{actualType}". | +| BCP028 | Identifier "{identifier}" is declared multiple times. Remove or rename the duplicates. | +| BCP029 | The resource type is not valid. Specify a valid resource type of format "<types>@<apiVersion>". | +| BCP030 | The output type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. | +| BCP031 | The parameter type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. | +| BCP032 | The value must be a compile-time constant. | +| <a id='BCP033' />[BCP033](./diagnostics/bcp033.md) | Expected a value of type <data-type> but the provided value is of type <data-type>. | +| BCP034 | The enclosing array expected an item of type "{expectedType}", but the provided item was of type "{actualType}". 
| +| <a id='BCP035' />[BCP035](./diagnostics/bcp035.md) | The specified <data-type> declaration is missing the following required properties: <property-name>. | +| <a id='BCP036' />[BCP036](./diagnostics/bcp036.md) | The property <property-name> expected a value of type <data-type> but the provided value is of type <data-type>. | +| <a id='BCP037' />[BCP037](./diagnostics/bcp037.md) | The property <property-name> is not allowed on objects of type <type-definition>. | +| <a id='BCP040' />[BCP040](./diagnostics/bcp040.md) | String interpolation is not supported for keys on objects of type <type-definition>. | +| BCP041 | Values of type "{valueType}" cannot be assigned to a variable. | +| BCP043 | This is not a valid expression. | +| BCP044 | Cannot apply operator "{operatorName}" to operand of type "{type}". | +| BCP045 | Cannot apply operator "{operatorName}" to operands of type "{type1}" and "{type2}".{(additionalInfo is null ? string.Empty : " " + additionalInfo)} | +| BCP046 | Expected a value of type "{type}". | +| BCP047 | String interpolation is unsupported for specifying the resource type. | +| BCP048 | Cannot resolve function overload. For details, see the documentation. | +| BCP049 | The array index must be of type "{LanguageConstants.String}" or "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". | +| BCP050 | The specified path is empty. | +| BCP051 | The specified path begins with "/". Files must be referenced using relative paths. | +| BCP052 | The type "{type}" does not contain property "{badProperty}". | +| BCP053 | The type "{type}" does not contain property "{badProperty}". Available properties include {ToQuotedString(availableProperties)}. | +| BCP054 | The type "{type}" does not contain any properties. | +| BCP055 | Cannot access properties of type "{wrongType}". An "{LanguageConstants.Object}" type is required. | +| BCP056 | The reference to name "{name}" is ambiguous because it exists in namespaces {ToQuotedString(namespaces)}. The reference must be fully qualified. | +| BCP057 | The name "{name}" does not exist in the current context. | +| BCP059 | The name "{name}" is not a function. | +| BCP060 | The "variables" function is not supported. Directly reference variables by their symbolic names. | +| BCP061 | The "parameters" function is not supported. Directly reference parameters by their symbolic names. | +| BCP062 | The referenced declaration with name "{name}" is not valid. | +| BCP063 | The name "{name}" is not a parameter, variable, resource or module. | +| BCP064 | Found unexpected tokens in interpolated expression. | +| BCP065 | Function "{functionName}" is not valid at this location. It can only be used as a parameter default value. | +| BCP066 | Function "{functionName}" is not valid at this location. It can only be used in resource declarations. | +| BCP067 | Cannot call functions on type "{wrongType}". An "{LanguageConstants.Object}" type is required. | +| BCP068 | Expected a resource type string. Specify a valid resource type of format "<types>@<apiVersion>". | +| BCP069 | The function "{function}" is not supported. Use the "{@operator}" operator instead. | +| BCP070 | Argument of type "{argumentType}" is not assignable to parameter of type "{parameterType}". | +| BCP071 | Expected {expected}, but got {argumentCount}. | +| <a id='BCP072' />[BCP072](./diagnostics/bcp072.md) | This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. 
| +| <a id='BCP073' />[BCP073](./diagnostics/bcp073.md) | The property <property-name> is read-only. Expressions cannot be assigned to read-only properties. | +| BCP074 | Indexing over arrays requires an index of type "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". | +| BCP075 | Indexing over objects requires an index of type "{LanguageConstants.String}" but the provided index was of type "{wrongType}". | +| BCP076 | Cannot index over expression of type "{wrongType}". Arrays or objects are required. | +| BCP077 | The property "{badProperty}" on type "{type}" is write-only. Write-only properties cannot be accessed. | +| BCP078 | The property "{propertyName}" requires a value of type "{expectedType}", but none was supplied. | +| BCP079 | This expression is referencing its own declaration, which is not allowed. | +| BCP080 | The expression is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). | +| BCP081 | Resource type "{resourceTypeReference.FormatName()}" does not have types available. Bicep is unable to validate resource properties prior to deployment, but this will not block the resource from being deployed. | +| BCP082 | The name "{name}" does not exist in the current context. Did you mean "{suggestedName}"? | +| BCP083 | The type "{type}" does not contain property "{badProperty}". Did you mean "{suggestedProperty}"? | +| BCP084 | The symbolic name "{name}" is reserved. Please use a different symbolic name. Reserved namespaces are {ToQuotedString(namespaces.OrderBy(ns => ns))}. | +| BCP085 | The specified file path contains one ore more invalid path characters. The following are not permitted: {ToQuotedString(forbiddenChars.OrderBy(x => x).Select(x => x.ToString()))}. | +| BCP086 | The specified file path ends with an invalid character. The following are not permitted: {ToQuotedString(forbiddenPathTerminatorChars.OrderBy(x => x).Select(x => x.ToString()))}. | +| BCP087 | Array and object literals are not allowed here. | +| BCP088 | The property "{property}" expected a value of type "{expectedType}" but the provided value is of type "{actualStringLiteral}". Did you mean "{suggestedStringLiteral}"? | +| BCP089 | The property "{property}" is not allowed on objects of type "{type}". Did you mean "{suggestedProperty}"? | +| BCP090 | This module declaration is missing a file path reference. | +| BCP091 | An error occurred reading file. {failureMessage} | +| BCP092 | String interpolation is not supported in file paths. | +| BCP093 | File path "{filePath}" could not be resolved relative to "{parentPath}". | +| BCP094 | This module references itself, which is not allowed. | +| BCP095 | The file is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). | +| BCP096 | Expected a module identifier at this location. | +| BCP097 | Expected a module path string. This should be a relative path to another bicep file, e.g. 'myModule.bicep' or '../parent/myModule.bicep' | +| BCP098 | The specified file path contains a "\" character. Use "/" instead as the directory separator character. | +| BCP099 | The "{LanguageConstants.ParameterAllowedPropertyName}" array must contain one or more items. | +| BCP100 | The function "if" is not supported. Use the "?:\" (ternary conditional) operator instead, e.g. condition ? ValueIfTrue : ValueIfFalse | +| BCP101 | The "createArray" function is not supported. Construct an array literal using []. | +| BCP102 | The "createObject" function is not supported. Construct an object literal using {}. 
| +| BCP103 | The following token is not recognized: "{token}". Strings are defined using single quotes in bicep. | +| BCP104 | The referenced module has errors. | +| BCP105 | Unable to load file from URI "{fileUri}". | +| BCP106 | Expected a new line character at this location. Commas are not used as separator delimiters. | +| BCP107 | The function "{name}" does not exist in namespace "{namespaceType.Name}". | +| BCP108 | The function "{name}" does not exist in namespace "{namespaceType.Name}". Did you mean "{suggestedName}"? | +| BCP109 | The type "{type}" does not contain function "{name}". | +| BCP110 | The type "{type}" does not contain function "{name}". Did you mean "{suggestedName}"? | +| BCP111 | The specified file path contains invalid control code characters. | +| BCP112 | The "{LanguageConstants.TargetScopeKeyword}" cannot be declared multiple times in one file. | +| BCP113 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeTenant}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include tenant: tenant(), named management group: managementGroup(<name>), named subscription: subscription(<subId>), or named resource group in a named subscription: resourceGroup(<subId>, <name>). | +| BCP114 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeManagementGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current management group: managementGroup(), named management group: managementGroup(<name>), named subscription: subscription(<subId>), tenant: tenant(), or named resource group in a named subscription: resourceGroup(<subId>, <name>). | +| BCP115 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeSubscription}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current subscription: subscription(), named subscription: subscription(<subId>), named resource group in same subscription: resourceGroup(<name>), named resource group in different subscription: resourceGroup(<subId>, <name>), or tenant: tenant(). | +| BCP116 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeResourceGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current resource group: resourceGroup(), named resource group in same subscription: resourceGroup(<name>), named resource group in a different subscription: resourceGroup(<subId>, <name>), current subscription: subscription(), named subscription: subscription(<subId>) or tenant: tenant(). | +| BCP117 | An empty indexer is not allowed. Specify a valid expression. | +| BCP118 | Expected the "{" character, the "[" character, or the "if" keyword at this location. | +| BCP119 | Unsupported scope for extension resource deployment. Expected a resource reference. | +| BCP120 | This expression is being used in an assignment to the "{propertyName}" property of the "{objectTypeName}" type, which requires a value that can be calculated at the start of the deployment. | +| BCP121 | Resources: {ToQuotedString(resourceNames)} are defined with this same name in a file. Rename them or split into different modules. | +| BCP122 | Modules: {ToQuotedString(moduleNames)} are defined with this same name and this same scope in a file. Rename them or split into different modules. 
| +| BCP123 | Expected a namespace or decorator name at this location. | +| BCP124 | The decorator "{decoratorName}" can only be attached to targets of type "{attachableType}", but the target has type "{targetType}". | +| BCP125 | Function "{functionName}" cannot be used as a parameter decorator. | +| BCP126 | Function "{functionName}" cannot be used as a variable decorator. | +| BCP127 | Function "{functionName}" cannot be used as a resource decorator. | +| BCP128 | Function "{functionName}" cannot be used as a module decorator. | +| BCP129 | Function "{functionName}" cannot be used as an output decorator. | +| BCP130 | Decorators are not allowed here. | +| BCP132 | Expected a declaration after the decorator. | +| BCP133 | The unicode escape sequence is not valid. Valid unicode escape sequences range from \\u{0} to \\u{10FFFF}. | +| BCP134 | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} is not valid for this module. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. | +| BCP135 | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} is not valid for this resource type. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. | +| BCP136 | Expected a loop item variable identifier at this location. | +| BCP137 | Loop expected an expression of type "{LanguageConstants.Array}" but the provided value is of type "{actualType}". | +| BCP138 | For-expressions are not supported in this context. For-expressions may be used as values of resource, module, variable, and output declarations, or values of resource and module properties. | +| BCP139 | A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope. | +| BCP140 | The multi-line string at this location is not terminated. Terminate it with "'''. | +| BCP141 | The expression cannot be used as a decorator as it is not callable. | +| BCP142 | Property value for-expressions cannot be nested. | +| BCP143 | For-expressions cannot be used with properties whose names are also expressions. | +| BCP144 | Directly referencing a resource or module collection is not currently supported here. Apply an array indexer to the expression. | +| BCP145 | Output "{identifier}" is declared multiple times. Remove or rename the duplicates. | +| BCP147 | Expected a parameter declaration after the decorator. | +| BCP148 | Expected a variable declaration after the decorator. | +| BCP149 | Expected a resource declaration after the decorator. | +| BCP150 | Expected a module declaration after the decorator. | +| BCP151 | Expected an output declaration after the decorator. | +| BCP152 | Function "{functionName}" cannot be used as a decorator. | +| BCP153 | Expected a resource or module declaration after the decorator. | +| BCP154 | Expected a batch size of at least {limit} but the specified value was "{value}". | +| BCP155 | The decorator "{decoratorName}" can only be attached to resource or module collections. | +| BCP156 | The resource type segment "{typeSegment}" is invalid. Nested resources must specify a single type segment, and optionally can specify an API version using the format "<type>@<apiVersion>". | +| BCP157 | The resource type cannot be determined due to an error in the containing resource. | +| BCP158 | Cannot access nested resources of type "{wrongType}". A resource type is required. 
| +| BCP159 | The resource "{resourceName}" does not contain a nested resource named "{identifierName}". Known nested resources are: {ToQuotedString(nestedResourceNames)}. | +| BCP160 | A nested resource cannot appear inside of a resource with a for-expression. | +| BCP162 | Expected a loop item variable identifier or "(" at this location. | +| BCP164 | A child resource's scope is computed based on the scope of its ancestor resource. This means that using the "scope" property on a child resource is unsupported. | +| BCP165 | A resource's computed scope must match that of the Bicep file for it to be deployable. This resource's scope is computed from the "scope" property value assigned to ancestor resource "{ancestorIdentifier}". You must use modules to deploy resources to a different scope. | +| BCP166 | Duplicate "{decoratorName}" decorator. | +| BCP167 | Expected the "{" character or the "if" keyword at this location. | +| BCP168 | Length must not be a negative value. | +| BCP169 | Expected resource name to contain {expectedSlashCount} "/" character(s). The number of name segments must match the number of segments in the resource type. | +| BCP170 | Expected resource name to not contain any "/" characters. Child resources with a parent resource reference (via the parent property or via nesting) must not contain a fully-qualified name. | +| BCP171 | Resource type "{resourceType}" is not a valid child resource of parent "{parentResourceType}". | +| BCP172 | The resource type cannot be validated due to an error in parent resource "{resourceName}". | +| BCP173 | The property "{property}" cannot be used in an existing resource declaration. | +| BCP174 | Type validation is not available for resource types declared containing a "/providers/" segment. Please instead use the "scope" property. | +| BCP176 | Values of the "any" type are not allowed here. | +| BCP177 | This expression is being used in the if-condition expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} | +| BCP178 | This expression is being used in the for-expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} | +| BCP179 | Unique resource or deployment name is required when looping. The loop item variable "{itemVariableName}" or the index variable "{indexVariableName}" must be referenced in at least one of the value expressions of the following properties in the loop body: {ToQuotedString(expectedVariantProperties)} | +| BCP180 | Function "{functionName}" is not valid at this location. It can only be used when directly assigning to a module parameter with a secure decorator. | +| BCP181 | This expression is being used in an argument of the function "{functionName}", which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} | +| BCP182 | This expression is being used in the for-body of the variable "{variableName}", which requires values that can be calculated at the start of the deployment.{variableDependencyChainClause}{violatingPropertyNameClause}{accessiblePropertiesClause} | +| BCP183 | The value of the module "params" property must be an object literal. | +| BCP184 | File '{filePath}' exceeded maximum size of {maxSize} {unit}. | +| BCP185 | Encoding mismatch. File was loaded with '{detectedEncoding}' encoding. | +| BCP186 | Unable to parse literal JSON value. 
Please ensure that it is well-formed. | +| BCP187 | The property "{property}" does not exist in the resource or type definition, although it might still be valid.{TypeInaccuracyClause} | +| BCP188 | The referenced ARM template has errors. Please see [https://aka.ms/arm-template](https://aka.ms/arm-template) for information on how to diagnose and fix the template. | +| BCP189 | (allowedSchemes.Contains(ArtifactReferenceSchemes.Local, StringComparer.Ordinal), allowedSchemes.Any(scheme => !string.Equals(scheme, ArtifactReferenceSchemes.Local, StringComparison.Ordinal))) switch { (false, false) => "Module references are not supported in this context.", (false, true) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a module reference using one of the following schemes: {FormatSchemes()}", (true, false) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a path to a local module file.", (true, true) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a path to a local module file or a module reference using one of the following schemes: {FormatSchemes()}"} | +| BCP190 | The artifact with reference "{artifactRef}" has not been restored. | +| BCP191 | Unable to restore the artifact with reference "{artifactRef}". | +| BCP192 | Unable to restore the artifact with reference "{artifactRef}": {message} | +| BCP193 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.Oci}:<artifact-uri>:<tag>", or "{ArtifactReferenceSchemes.Oci}/<module-alias>:<module-name-or-path>:<tag>". | +| BCP194 | {BuildInvalidTemplateSpecReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.TemplateSpecs}:<subscription-ID>/<resource-group-name>/<template-spec-name>:<version>", or "{ArtifactReferenceSchemes.TemplateSpecs}/<module-alias>:<template-spec-name>:<version>". | +| BCP195 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The artifact path segment "{badSegment}" is not valid. Each artifact name path segment must be a lowercase alphanumeric string optionally separated by a ".", "_", or \"-\"." | +| BCP196 | The module tag or digest is missing. | +| BCP197 | The tag "{badTag}" exceeds the maximum length of {maxLength} characters. | +| BCP198 | The tag "{badTag}" is not valid. Valid characters are alphanumeric, ".", "_", or "-" but the tag cannot begin with ".", "_", or "-". | +| BCP199 | Module path "{badRepository}" exceeds the maximum length of {maxLength} characters. | +| BCP200 | The registry "{badRegistry}" exceeds the maximum length of {maxLength} characters. | +| BCP201 | Expected a provider specification string of with a valid format at this location. Valid formats are "br:<providerRegistryHost>/<providerRepositoryPath>@<providerVersion>" or "br/<providerAlias>:<providerName>@<providerVersion>". | +| BCP202 | Expected a provider alias name at this location. | +| BCP203 | Using provider statements requires enabling EXPERIMENTAL feature "Extensibility". | +| BCP204 | Provider namespace "{identifier}" is not recognized. | +| BCP205 | Provider namespace "{identifier}" does not support configuration. | +| BCP206 | Provider namespace "{identifier}" requires configuration, but none was provided. | +| BCP207 | Namespace "{identifier}" is declared multiple times. Remove the duplicates. | +| BCP208 | The specified namespace "{badNamespace}" is not recognized. 
Specify a resource reference using one of the following namespaces: {ToQuotedString(allowedNamespaces)}. | +| BCP209 | Failed to find resource type "{resourceType}" in namespace "{@namespace}". | +| BCP210 | Resource type belonging to namespace "{childNamespace}" cannot have a parent resource type belonging to different namespace "{parentNamespace}". | +| BCP211 | The module alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". | +| BCP212 | The Template Spec module alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. | +| BCP213 | The OCI artifact module alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. | +| BCP214 | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is in valid. The "subscription" property cannot be null or undefined. | +| BCP215 | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is in valid. The "resourceGroup" property cannot be null or undefined. | +| BCP216 | The OCI artifact module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property cannot be null or undefined. | +| BCP217 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The subscription ID "{subscriptionId}" is not a GUID. | +| BCP218 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" exceeds the maximum length of {maximumLength} characters. | +| BCP219 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" is invalid. Valid characters are alphanumeric, unicode characters, ".", "_", "-", "(", or ")", but the resource group name cannot end with ".". | +| BCP220 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" exceeds the maximum length of {maximumLength} characters. | +| BCP221 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name cannot end with ".". | +| BCP222 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" exceeds the maximum length of {maximumLength} characters. | +| BCP223 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name cannot end with ".". | +| BCP224 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The digest "{badDigest}" is not valid. The valid format is a string "sha256:" followed by exactly 64 lowercase hexadecimal digits. | +| BCP225 | The discriminator property "{propertyName}" value cannot be determined at compilation time. Type checking for this object is disabled. | +| BCP226 | Expected at least one diagnostic code at this location. Valid format is "#disable-next-line diagnosticCode1 diagnosticCode2 ...". | +| BCP227 | The type "{resourceType}" cannot be used as a parameter or output type. Extensibility types are currently not supported as parameters or outputs. | +| BCP229 | The parameter "{parameterName}" cannot be used as a resource scope or parent. 
Resources passed as parameters cannot be used as a scope or parent of a resource. | +| BCP300 | Expected a type literal at this location. Please specify a concrete value or a reference to a literal type. | +| BCP301 | The type name "{reservedName}" is reserved and may not be attached to a user-defined type. | +| BCP302 | The name "{name}" is not a valid type. Please specify one of the following types: {ToQuotedString(validTypes)}. | +| BCP303 | String interpolation is unsupported for specifying the provider. | +| BCP304 | Invalid provider specifier string. Specify a valid provider of format "<providerName>@<providerVersion>". | +| BCP305 | Expected the "with" keyword, "as" keyword, or a new line character at this location. | +| BCP306 | The name "{name}" refers to a namespace, not to a type. | +| BCP307 | The expression cannot be evaluated, because the identifier properties of the referenced existing resource including {ToQuotedString(runtimePropertyNames.OrderBy(x => x))} cannot be calculated at the start of the deployment. In this situation, {accessiblePropertyNamesClause}{accessibleFunctionNamesClause}. | +| BCP308 | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a user-defined type. | +| BCP309 | Values of type "{flattenInputType.Name}" cannot be flattened because "{incompatibleType.Name}" is not an array type. | +| BCP311 | The provided index value of "{indexSought}" is not valid for type "{typeName}". Indexes for this type must be between 0 and {tupleLength - 1}. | +| BCP315 | An object type may have at most one additional properties declaration. | +| BCP316 | The "{LanguageConstants.ParameterSealedPropertyName}" decorator may not be used on object types with an explicit additional properties type declaration. | +| BCP317 | Expected an identifier, a string, or an asterisk at this location. | +| BCP318 | The value of type "{possiblyNullType}" may be null at the start of the deployment, which would cause this access expression (and the overall deployment with it) to fail. If you do not know whether the value will be null and the template would handle a null value for the overall expression, use a `.?` (safe dereference) operator to short-circuit the access expression if the base expression's value is null: {accessExpression.AsSafeAccess().ToString()}. If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. | +| BCP319 | The type at "{errorSource}" could not be resolved by the ARM JSON template engine. Original error message: "{message}" | +| BCP320 | The properties of module output resources cannot be accessed directly. To use the properties of this resource, pass it as a resource-typed parameter to another module and access the parameter's properties therein. | +| BCP321 | Expected a value of type "{expectedType}" but the provided value is of type "{actualType}". If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. | +| BCP322 | The `.?` (safe dereference) operator may not be used on instance function invocations. | +| BCP323 | The `[?]` (safe dereference) operator may not be used on resource or module collections. | +| BCP325 | Expected a type identifier at this location. | +| BCP326 | Nullable-typed parameters may not be assigned default values. 
They have an implicit default of 'null' that cannot be overridden. | +| <a id='BCP327' />[BCP327](./diagnostics/bcp327.md) | The provided value (which will always be greater than or equal to <value>) is too large to assign to a target for which the maximum allowable value is <max-value>. | +| <a id='BCP328' />[BCP328](./diagnostics/bcp328.md) | The provided value (which will always be less than or equal to <value>) is too small to assign to a target for which the minimum allowable value is <max-value>. | +| BCP329 | The provided value can be as small as {sourceMin} and may be too small to assign to a target with a configured minimum of {targetMin}. | +| BCP330 | The provided value can be as large as {sourceMax} and may be too large to assign to a target with a configured maximum of {targetMax}. | +| BCP331 | A type's "{minDecoratorName}" must be less than or equal to its "{maxDecoratorName}", but a minimum of {minValue} and a maximum of {maxValue} were specified. | +| <a id='BCP332' />[BCP332](./diagnostics/bcp332.md) | The provided value (whose length will always be greater than or equal to <string-length>) is too long to assign to a target for which the maximum allowable length is <max-length>. | +| <a id='BCP333' />[BCP333](./diagnostics/bcp333.md) | The provided value (whose length will always be less than or equal to <string-length>) is too short to assign to a target for which the minimum allowable length is <min-length>. | +| BCP334 | The provided value can have a length as small as {sourceMinLength} and may be too short to assign to a target with a configured minimum length of {targetMinLength}. | +| BCP335 | The provided value can have a length as large as {sourceMaxLength} and may be too long to assign to a target with a configured maximum length of {targetMaxLength}. | +| BCP337 | This declaration type is not valid for a Bicep Parameters file. Specify a "{LanguageConstants.UsingKeyword}", "{LanguageConstants.ParameterKeyword}" or "{LanguageConstants.VariableKeyword}" declaration. | +| BCP338 | Failed to evaluate parameter "{parameterName}": {message} | +| BCP339 | The provided array index value of "{indexSought}" is not valid. Array index should be greater than or equal to 0. | +| BCP340 | Unable to parse literal YAML value. Please ensure that it is well-formed. | +| BCP341 | This expression is being used inside a function declaration, which requires a value that can be calculated at the start of the deployment. {variableDependencyChainClause}{accessiblePropertiesClause} | +| BCP342 | User-defined types are not supported in user-defined function parameters or outputs. | +| BCP344 | Expected an assert identifier at this location. | +| BCP345 | A test declaration can only reference a Bicep File | +| BCP0346 | Expected a test identifier at this location. | +| BCP0347 | Expected a test path string at this location. | +| BCP348 | Using a test declaration statement requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.TestFramework)}". | +| BCP349 | Using an assert declaration requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.Assertions)}". | +| BCP350 | Value of type "{valueType}" cannot be assigned to an assert. Asserts can take values of type 'bool' only. | +| BCP351 | Function "{functionName}" is not valid at this location. It can only be used when directly assigning to a parameter. | +| BCP352 | Failed to evaluate variable "{name}": {message} | +| BCP353 | The {itemTypePluralName} {ToQuotedString(itemNames)} differ only in casing. 
The ARM deployments engine is not case sensitive and will not be able to distinguish between them. | +| BCP354 | Expected left brace ('{') or asterisk ('*') character at this location. | +| BCP355 | Expected the name of an exported symbol at this location. | +| BCP356 | Expected a valid namespace identifier at this location. | +| BCP358 | This declaration is missing a template file path reference. | +| BCP360 | The '{symbolName}' symbol was not found in (or was not exported by) the imported template. | +| BCP361 | The "@export()" decorator must target a top-level statement. | +| BCP362 | This symbol is imported multiple times under the names {string.Join(", ", importedAs.Select(identifier => $"'{identifier}'"))}. | +| BCP363 | The "{LanguageConstants.TypeDiscriminatorDecoratorName}" decorator can only be applied to object-only union types with unique member types. | +| BCP364 | The property "{discriminatorPropertyName}" must be a required string literal on all union member types. | +| BCP365 | The value "{discriminatorPropertyValue}" for discriminator property "{discriminatorPropertyName}" is duplicated across multiple union member types. The value must be unique across all union member types. | +| BCP366 | The discriminator property name must be "{acceptablePropertyName}" on all union member types. | +| BCP367 | The "{featureName}" feature is temporarily disabled. | +| BCP368 | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses a reference to a secret value in Azure Key Vault. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. | +| BCP369 | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses the default value defined in the template. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. | +| BCP372 | The "@export()" decorator may not be applied to variables that refer to parameters, modules, or resource, either directly or indirectly. The target of this decorator contains direct or transitive references to the following unexportable symbols: {ToQuotedString(nonExportableSymbols)}. | +| BCP373 | Unable to import the symbol named "{name}": {message} | +| BCP374 | The imported model cannot be loaded with a wildcard because it contains the following duplicated exports: {ToQuotedString(ambiguousExportNames)}. | +| BCP375 | An import list item that identifies its target with a quoted string must include an 'as <alias>' clause. | +| BCP376 | The "{name}" symbol cannot be imported because imports of kind {exportMetadataKind} are not supported in files of kind {sourceFileKind}. | +| BCP377 | The provider alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". | +| BCP378 | The OCI artifact provider alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property cannot be null or undefined. | +| BCP379 | The OCI artifact provider alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. | +| BCP380 | Artifacts of type: "{artifactType}" are not supported. | +| BCP381 | Declaring provider namespaces with the "import" keyword has been deprecated. Please use the "provider" keyword instead. 
| +| BCP383 | The "{typeName}" type is not parameterizable. | +| BCP384 | The "{typeName}" type requires {requiredArgumentCount} argument(s). | +| BCP385 | Using resource-derived types requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ResourceDerivedTypes)}". | +| BCP386 | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a resource-derived type. | +| BCP387 | Indexing into a type requires an integer greater than or equal to 0. | +| BCP388 | Cannot access elements of type "{wrongType}" by index. A tuple type is required. | +| BCP389 | The type "{wrongType}" does not declare an additional properties type. | +| BCP390 | The array item type access operator ('[*]') can only be used with typed arrays. | +| BCP391 | Type member access is only supported on a reference to a named type. | +| BCP392 | "The supplied resource type identifier "{resourceTypeIdentifier}" was not recognized as a valid resource type name." | +| BCP393 | "The type pointer segment "{unrecognizedSegment}" was not recognized. Supported pointer segments are: "properties", "items", "prefixItems", and "additionalProperties"." | +| BCP394 | Resource-derived type expressions must derefence a property within the resource body. Using the entire resource body type is not permitted. | +| BCP395 | Declaring provider namespaces using the '<providerName>@<version>' expression has been deprecated. Please use an identifier instead. | +| BCP396 | The referenced provider types artifact has been published with malformed content. | +| BCP397 | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is referenced in the "{RootConfiguration.ImplicitProvidersConfigurationKey}" section, but is missing corresponding configuration in the "{RootConfiguration.ProvidersConfigurationKey}" section." | +| BCP398 | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is configured as built-in in the "{RootConfiguration.ProvidersConfigurationKey}" section, but no built-in provider exists." | +| BCP399 | Fetching az types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.DynamicTypeLoading)}". | +| BCP400 | Fetching types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ProviderRegistry)}". | ++## Next steps ++To learn about Bicep, see [Bicep overview](./overview.md). |
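Several entries in the preceding table (BCP226, for example) reference the `#disable-next-line` directive. As a minimal sketch of how it's used, assuming a resource type with no type definitions available (the `Contoso.Example/widgets` type below is hypothetical), a warning such as BCP081 can be suppressed for a single line:

```bicep
// BCP081 warns that no type definitions are available for this resource type.
// The directive suppresses that one diagnostic on the next line only.
#disable-next-line BCP081
resource widget 'Contoso.Example/widgets@2024-01-01' = {
  name: 'demoWidget'
  location: 'eastus'
}
```

Suppression hides the diagnostic without addressing its cause, so it's best reserved for warnings you've confirmed are inaccurate for your scenario.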
azure-resource-manager | Deploy To Management Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md | To use an ARM template to create a new Azure subscription in a management group, * [Programmatically create Azure subscriptions for a Microsoft Customer Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md) * [Programmatically create Azure subscriptions for a Microsoft Partner Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md) -To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-subscriptions-in-arm-template) +To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-a-subscription-in-an-arm-template) ## Azure Policy |
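For orientation, here's a minimal Bicep sketch of the subscription move described above, assuming the tenant-scoped `Microsoft.Management/managementGroups/subscriptions` resource type at API version `2021-04-01`; the parameter values are illustrative:

```bicep
targetScope = 'tenant'

param subscriptionId string
param managementGroupName string

resource mg 'Microsoft.Management/managementGroups@2021-04-01' existing = {
  name: managementGroupName
}

// Declaring the subscription as a child of the management group moves it there.
resource placement 'Microsoft.Management/managementGroups/subscriptions@2021-04-01' = {
  parent: mg
  name: subscriptionId
}
```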
azure-resource-manager | Bcp033 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp033.md | + + Title: BCP033 +description: Error/warning - Expected a value of type <data-type> but the provided value is of type <data-type>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP033 ++This error/warning occurs when you assign a value of a mismatched data type. ++## Error/warning description ++`Expected a value of type <data-type> but the provided value is of type <data-type>.` ++## Solution ++Use the expected data type. ++## Examples ++The following example raises the error because the expected data type is a string. The actual provided value is an integer: ++```bicep +var myValue = 5 ++output myString string = myValue +``` ++You can fix the error by providing a string value: ++```bicep +var myValue = '5' ++output myString string = myValue +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
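If the value genuinely needs to stay an integer elsewhere in the file, another way to clear BCP033 is to convert it at the point of use, for example with the built-in `string()` function; a minimal sketch:

```bicep
var myValue = 5

// string() converts the integer to a string, so the output type matches.
output myString string = string(myValue)
```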
azure-resource-manager | Bcp035 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp035.md | + + Title: BCP035 +description: Error/warning - The specified <data-type> declaration is missing the following required properties: <property-name>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP035 ++This error/warning occurs when your resource definition is missing a required property. ++## Error/warning description ++`The specified <data-type> declaration is missing the following required properties: <property-name>.` ++## Solution ++Add the missing property to the resource definition. ++## Examples ++The following example raises the warning for **virtualNetworkGateway1** and **virtualNetworkGateway2**: ++```bicep +var networkConnectionName = 'testConnection' +var location = 'eastus' +var vnetGwAId = 'gatewayA' +var vnetGwBId = 'gatewayB' ++resource networkConnection 'Microsoft.Network/connections@2023-11-01' = { + name: networkConnectionName + location: location + properties: { + virtualNetworkGateway1: { + id: vnetGwAId + } + virtualNetworkGateway2: { + id: vnetGwBId + } ++ connectionType: 'Vnet2Vnet' + } +} +``` ++The warning is: ++```warning +The specified "object" declaration is missing the following required properties: "properties". If this is an inaccuracy in the documentation, please report it to the Bicep Team. +``` ++You can verify the missing properties in the [template reference](/azure/templates). If you see the warning in Visual Studio Code, hover the cursor over the resource symbolic name and select **View document** to open the template reference. ++You can fix the issue by adding the missing properties: ++```bicep +var networkConnectionName = 'testConnection' +var location = 'eastus' +var vnetGwAId = 'gatewayA' +var vnetGwBId = 'gatewayB' ++resource networkConnection 'Microsoft.Network/connections@2023-11-01' = { + name: networkConnectionName + location: location + properties: { + virtualNetworkGateway1: { + id: vnetGwAId + properties: {} + } + virtualNetworkGateway2: { + id: vnetGwBId + properties: {} + } ++ connectionType: 'Vnet2Vnet' + } +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp036 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp036.md | + + Title: BCP036 +description: Error/warning - The property <property-name> expected a value of type <data-type> but the provided value is of type <data-type>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP036 ++This error/warning occurs when you assign a value to a property whose expected data type isn't compatible with the type of the assigned value. ++## Error/warning description ++`The property <property-name> expected a value of type <data-type> but the provided value is of type <data-type>.` ++## Solution ++Assign a value with the correct data type. ++## Examples ++The following example raises the error because `sku` is defined as a string, not an integer: ++```bicep +type storageAccountConfigType = { + name: string + sku: string +} ++param foo storageAccountConfigType = { + name: 'myStorage' + sku: 2 +} +``` ++You can fix the issue by assigning a string value to `sku`: ++```bicep +type storageAccountConfigType = { + name: string + sku: string +} ++param foo storageAccountConfigType = { + name: 'myStorage' + sku: 'Standard_LRS' +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
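A related design choice worth considering: declaring `sku` as a union of string literal types lets BCP036 catch not only wrong data types but also unsupported values. A sketch, where the two SKU names are illustrative:

```bicep
type storageAccountConfigType = {
  name: string
  sku: 'Premium_LRS' | 'Standard_LRS'
}

param foo storageAccountConfigType = {
  name: 'myStorage'
  sku: 'Standard_LRS' // any value outside the union is rejected at compile time
}
```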
azure-resource-manager | Bcp037 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp037.md | + + Title: BCP037 +description: Warning - The property <property-name> is not allowed on objects of type <type-definition>. ++ Last updated : 07/15/2024+++# Bicep warning code - BCP037 ++This warning occurs when you specify a property that isn't defined in the object's type definition. ++## Warning description ++`The property <property-name> is not allowed on objects of type <type-definition>.` ++## Solution ++Remove the undefined property. ++## Examples ++The following example raises the warning because `bar` isn't defined in `storageAccountConfigType`: ++```bicep +type storageAccountConfigType = { + name: string + sku: string +} ++param foo storageAccountConfigType = { + name: 'myStorage' + sku: 'Standard_LRS' + bar: 'myBar' +} +``` ++You can fix the issue by removing the property: ++```bicep +type storageAccountConfigType = { + name: string + sku: string +} ++param foo storageAccountConfigType = { + name: 'myStorage' + sku: 'Standard_LRS' +} +``` ++The following example raises the warning because `obj` is a sealed type and doesn't define a `baz` property: ++```bicep +@sealed() +type obj = { + foo: string + bar: string +} ++param p obj = { + foo: 'foo' + bar: 'bar' + baz: 'baz' +} +``` ++You can fix the issue by removing the property: ++```bicep +@sealed() +type obj = { + foo: string + bar: string +} ++param p obj = { + foo: 'foo' + bar: 'bar' +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp040 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp040.md | + + Title: BCP040 +description: Error/warning - String interpolation is not supported for keys on objects of type <type-definition>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP040 ++This error/warning occurs when the Bicep compiler can't determine the exact value of an interpolated string key. ++## Error/warning description ++`String interpolation is not supported for keys on objects of type <type-definition>.` ++## Solution ++Remove the string interpolation and use a literal key. ++## Examples ++The following example raises the warning because string interpolation is used for specifying the key `sku1`: ++```bicep +var name = 'sku' ++type storageAccountConfigType = { + name: string + sku1: string +} ++param foo storageAccountConfigType = { + name: 'myStorage' + '${name}1': 'Standard_LRS' +} +``` ++You can fix the issue by replacing the interpolated key with a literal key: ++```bicep +var name = 'sku' ++type storageAccountConfigType = { + name: string + sku1: string +} ++param foo storageAccountConfigType = { + name: 'myStorage' + sku1: 'Standard_LRS' +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp053 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp053.md | + + Title: BCP053 +description: Error/warning - The type <resource-type> does not contain property <property-name>. Available properties include <property-names>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP053 ++This error/warning occurs when you reference a property that isn't defined in the resource type or [user-defined data type](../user-defined-data-types.md). ++## Error/warning description ++`The type <resource-type> does not contain property <property-name>. Available properties include <property-names>.` ++## Solution ++Reference the correct property name. ++## Examples ++The following example raises the error because `Microsoft.Storage/storageAccounts` doesn't contain a property called `bar`: ++```bicep +param location string ++resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = { + name: 'myStorage' + location: location + sku: { + name: 'Standard_LRS' + } + kind: 'StorageV2' +} ++output foo string = storage.bar +``` ++You can fix the error by referencing a valid property, such as `name`: ++```bicep +param location string ++resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = { + name: 'myStorage' + location: location + sku: { + name: 'Standard_LRS' + } + kind: 'StorageV2' +} ++output foo string = storage.name +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
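The same diagnostic applies to user-defined data types. A minimal sketch, where the `configType` type and the misspelled `size` property are illustrative:

```bicep
type configType = {
  name: string
}

param config configType = {
  name: 'demo'
}

// The compiler reports that "size" isn't declared on configType and
// that "name" is the only available property.
output size string = config.size
```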
azure-resource-manager | Bcp072 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp072.md | + + Title: BCP072 +description: Error - This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. ++ Last updated : 07/15/2024+++# Bicep error code - BCP072 ++This error occurs when you reference a variable in parameter default values. ++## Error description ++`This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values.` ++## Solution ++Reference another parameter instead. ++## Examples ++The following example raises the error because the parameter default value references a variable: ++```bicep +param foo string = bar ++var bar = 'HelloWorld!' +``` ++You can fix the error by referencing another parameter: ++```bicep +param foo string = bar +param bar string = 'HelloWorld!' ++output outValue string = foo +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
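If the value is fixed at authoring time, another option is to drop the parameter and reference the variable directly, since variables can be referenced anywhere except parameter default values; a minimal sketch:

```bicep
var bar = 'HelloWorld!'

// Reference the variable directly instead of routing it through a parameter default.
output outValue string = bar
```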
azure-resource-manager | Bcp073 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp073.md | + + Title: BCP073 +description: Warning - The property <property-name> is read-only. Expressions cannot be assigned to read-only properties. ++ Last updated : 07/15/2024+++# Bicep warning code - BCP073 ++This warning occurs when you assign a value to a read-only property. ++## Warning description ++`The property <property-name> is read-only. Expressions cannot be assigned to read-only properties.` ++## Solution ++Remove the property assignment from the file. ++## Examples ++The following example raises the warning because `sku` can only be set at the `storageAccounts` level. It's read-only for services under a storage account, such as `blobServices` and `fileServices`: ++```bicep +param location string ++resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = { + name: 'mystore' + location: location + sku: { + name: 'Standard_LRS' + } + kind: 'StorageV2' +} ++resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-04-01' = { + parent: storage + name: 'default' + sku: {} +} +``` ++You can fix the issue by removing the `sku` property assignment: ++```bicep +param location string ++resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = { + name: 'mystore' + location: location + sku: { + name: 'Standard_LRS' + } + kind: 'StorageV2' +} ++resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-04-01' = { + parent: storage + name: 'default' +} +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp327 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp327.md | + + Title: BCP327 +description: Error/warning - The provided value (which will always be greater than or equal to <value>) is too large to assign to a target for which the maximum allowable value is <max-value>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP327 ++This error/warning occurs when you assign a value that is greater than the maximum allowable value. ++## Error/warning description ++`The provided value (which will always be greater than or equal to <value>) is too large to assign to a target for which the maximum allowable value is <max-value>.` ++## Solution ++Assign a value that falls within the permitted range. ++## Examples ++The following example raises the error because `13` is greater than the maximum allowable value: ++```bicep +@minValue(1) +@maxValue(12) +param month int = 13 +``` ++You can fix the error by assigning a value within the permitted range: ++```bicep +@minValue(1) +@maxValue(12) +param month int = 12 +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp328 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp328.md | + + Title: BCP328 +description: Error/warning - The provided value (which will always be less than or equal to <value>) is too small to assign to a target for which the minimum allowable value is <min-value>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP328 ++This error/warning occurs when you assign a value that is less than the minimum allowable value. ++## Error/warning description ++`The provided value (which will always be less than or equal to <value>) is too small to assign to a target for which the minimum allowable value is <min-value>.` ++## Solution ++Assign a value that falls within the permitted range. ++## Examples ++The following example raises the error because `0` is less than the minimum allowable value: ++```bicep +@minValue(1) +@maxValue(12) +param month int = 0 +``` ++You can fix the error by assigning a value within the permitted range: ++```bicep +@minValue(1) +@maxValue(12) +param month int = 1 +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Bcp332 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp332.md | + + Title: BCP332 +description: Error/warning - The provided value (whose length will always be greater than or equal to <length>) is too long to assign to a target for which the maximum allowable length is <max-length>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP332 ++This error/warning occurs when you assign a string or array that's longer than the maximum allowable length. ++## Error/warning description ++`The provided value (whose length will always be greater than or equal to <length>) is too long to assign to a target for which the maximum allowable length is <max-length>.` ++## Solution ++Assign a string or array whose length is within the allowable range. ++## Examples ++The following example raises the error because the value `longerThan10` is longer than the maximum allowable length of `10`: ++```bicep +@minLength(3) +@maxLength(10) +param storageAccountName string = 'longerThan10' +``` ++You can fix the error by assigning a string whose length is within the allowable range: ++```bicep +@minLength(3) +@maxLength(10) +param storageAccountName string = 'myStorage' +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
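Because the `@minLength` and `@maxLength` decorators also apply to arrays, the same diagnostic is raised when an array literal has too many items. A minimal sketch with an illustrative parameter:

```bicep
@minLength(1)
@maxLength(3)
param regions array = [
  'eastus'
  'westus'
  'northeurope'
  'westeurope' // a fourth item exceeds the configured maximum length of 3
]
```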
azure-resource-manager | Bcp333 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp333.md | + + Title: BCP333 +description: Error/warning - The provided value (whose length will always be less than or equal to <length>) is too short to assign to a target for which the minimum allowable length is <min-length>. ++ Last updated : 07/15/2024+++# Bicep error/warning code - BCP333 ++This error/warning occurs when you assign a string or array that's shorter than the minimum allowable length. ++## Error/warning description ++`The provided value (whose length will always be less than or equal to <length>) is too short to assign to a target for which the minimum allowable length is <min-length>.` ++## Solution ++Assign a string or array whose length is within the allowable range. ++## Examples ++The following example raises the error because the value `st` is shorter than the minimum allowable length of `3`: ++```bicep +@minLength(3) +@maxLength(10) +param storageAccountName string = 'st' +``` ++You can fix the error by assigning a string whose length is within the allowable range: ++```bicep +@minLength(3) +@maxLength(10) +param storageAccountName string = 'myStorage' +``` ++## Next steps ++For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md). |
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | There are some important steps to do before moving a resource. By verifying thes * [Networking move guidance](./move-limitations/networking-move-limitations.md) * [Recovery Services move guidance](../../backup/backup-azure-move-recovery-services-vault.md?toc=/azure/azure-resource-manager/toc.json) * [Virtual Machines move guidance](./move-limitations/virtual-machines-move-limitations.md)- * To move an Azure subscription to a new management group, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions). + * To move an Azure subscription to a new management group, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions). 1. The destination subscription must be registered for the resource provider of the resource being moved. If not, you receive an error stating that the **subscription is not registered for a resource type**. You might see this error when moving a resource to a new subscription, but that subscription has never been used with that resource type. |
azure-resource-manager | Deploy To Management Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-management-group.md | To use an ARM template to create a new Azure subscription in a management group, * [Programmatically create Azure subscriptions for a Microsoft Customer Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md) * [Programmatically create Azure subscriptions for a Microsoft Partner Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md) -To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-subscriptions-in-arm-template) +To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-a-subscription-in-an-arm-template) ## Azure Policy |
backup | Backup Sql Server Database From Azure Vm Blade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-from-azure-vm-blade.md | + + Title: Back up SQL Server from the Azure VM blade by using Azure Backup +description: In this article, learn how to back up SQL Server databases from the Azure VM blade via the Azure portal. + Last updated : 07/23/2024+++++# Back up SQL Server from the Azure SQL Server VM blade ++This article describes how to use Azure Backup to back up SQL Server (running in an Azure VM) from the SQL VM resource via the Azure portal. ++SQL Server databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. You can back up SQL Server databases running on Azure virtual machines (VMs) by using [Azure Backup](backup-overview.md). ++>[!Note] +>Learn more about the [SQL backup supported configurations and scenarios](sql-support-matrix.md). ++## Prerequisites ++Before you back up a SQL Server database, see the [backup criteria](backup-sql-server-database-azure-vms.md#prerequisites). ++## Configure backup for a SQL Server database ++You can now configure Azure Backup for your SQL Server running in an Azure VM, directly from the SQL VM resource blade. ++To configure backup from the SQL VM blade, follow these steps: ++1. In the [Azure portal](https://portal.azure.com/), go to the *SQL VM resource*. ++ >[!Note] + >The SQL Server resource is different from the Virtual Machine resource. ++1. Go to **Settings** > **Backups**. ++ If the backup isn't configured for the VM, the following backup options appear: ++ - **Azure Backup** + - **Automated Backup** ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/select-backups.png" alt-text="Screenshot shows how to select the Backups option on a SQL VM." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/select-backups.png"::: ++1. On the **Azure Backup** blade, select **Enable** to start configuring the backup for the SQL Server by using Azure Backup. ++1. To start the backup operation, select an existing Recovery Services vault or [create a new vault](backup-sql-server-database-azure-vms.md#create-a-recovery-services-vault). ++1. Select **Discover** to start discovering databases in the VM. ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/start-database-discovery.png" alt-text="Screenshot shows how to start discovering the SQL database." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/start-database-discovery.png"::: ++ This operation takes some time to run when performed for the first time. ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/database-discovery-in-progress.png" alt-text="Screenshot shows the database discovery operation in progress." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/database-discovery-in-progress.png"::: ++ Azure Backup discovers all SQL Server databases on the VM. During discovery, the following operations run in the background: ++ 1. Azure Backup registers the VM with the vault for workload backup. All databases on the registered VM can only be backed up to this vault. + 1. Azure Backup installs the AzureBackupWindowsWorkload extension on the VM. No agent is installed on the SQL database. + 1. Azure Backup creates the service account NT Service\AzureWLBackupPluginSvc on the VM. + 1. All backup and restore operations use the service account. + 1.
NT Service\AzureWLBackupPluginSvc needs SQL sysadmin permissions. All SQL Server VMs created in Azure Marketplace come with the SqlIaaSExtension installed. ++ The AzureBackupWindowsWorkload extension uses the SQLIaaSExtension to automatically get the necessary permissions. ++1. Once the operation is completed, select **Configure backup**. ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/start-database-backup-configuration.png" alt-text="Screenshot shows how to start the database backup configuration." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/start-database-backup-configuration.png"::: ++1. Define a backup policy using one of the following options: ++ 1. Select the default policy as *HourlyLogBackup*. + 1. Select an existing backup policy previously created for SQL. + 1. [Create a new policy](tutorial-sql-backup.md#create-a-backup-policy) based on your RPO and retention range. ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/select-backup-policy.png" alt-text="Screenshot shows how to select a backup policy for the database." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/select-backup-policy.png"::: ++1. Select **Add** to view all the registered availability groups and standalone SQL Server instances. ++1. On **Select items to backup**, expand the list of all the *unprotected databases* in that instance or the *Always On availability group*. ++1. Select the *databases* to protect and select **OK**. ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/confirm-database-selection.png" alt-text="Screenshot shows how to confirm the selection of database for backup." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/confirm-database-selection.png"::: ++1. To optimize backup loads, Azure Backup allows/permits a maximum number of 50 databases in one backup job. ++ 1. To protect more than 50 databases, configure multiple backups. + 1. To enable the entire instance or the Always On availability group, in the AUTOPROTECT drop-down list, select ON, and then select OK. ++1. Select **Enable Backup** to submit the Configure Protection operation and track the configuration progress in the Notifications area of the portal. ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/enable-database-backup.png" alt-text="Screenshot shows how to enable the database backup operation." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/enable-database-backup.png"::: ++1. To get an overview of your configured backups and a summary of backup jobs, go to **Settings** > **Backups** in the SQL VM resource. ++ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/backup-jobs-summary.png" alt-text="Screenshot shows how to view the backup jobs summary." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/backup-jobs-summary.png"::: ++## Next steps ++- [Restore SQL Server databases on Azure VM](restore-sql-database-azure-vm.md) +- [Manage and monitor backed up SQL Server databases](manage-monitor-sql-database-backup.md) +- [Troubleshoot backups on a SQL Server database](backup-sql-server-azure-troubleshoot.md) +- [FAQ - Backing up SQL Server databases on Azure VMs - Azure Backup | Microsoft Learn](/azure/backup/faq-backup-sql-server) |
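The portal flow above can also be scripted. Here's a minimal sketch with the Az.RecoveryServices module, assuming a vault `myVault` in resource group `myRG`, a SQL VM named `mySqlVm`, and a database `MyDB`; none of these names come from the article.

```powershell
# Hypothetical sketch of the same protection flow with Az.RecoveryServices.
# Vault, resource group, server, and database names are placeholders.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'myRG' -Name 'myVault'

# Find the discovered database (parallels the Discover step in the portal).
$db = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL `
    -ItemType SQLDataBase -Name 'MyDB' -ServerName 'mySqlVm' -VaultId $vault.ID

# Reuse the default HourlyLogBackup policy and enable protection.
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name 'HourlyLogBackup' -VaultId $vault.ID
Enable-AzRecoveryServicesBackupProtection -ProtectableItem $db -Policy $policy -VaultId $vault.ID
```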
cloud-services | Applications Dont Support Tls 1 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/applications-dont-support-tls-1-2.md | -This article describes how to enable the older TLS protocols (TLS 1.0 and 1.1) as well as applying legacy cipher suites to support the additional protocols on the Windows Server 2019 cloud service web and worker roles. +This article describes how to enable the older TLS protocols (TLS 1.0 and 1.1). It also covers the application of legacy cipher suites to support the additional protocols on the Windows Server 2019 cloud service web and worker roles. -We understand that while we are taking steps to deprecate TLS 1.0 and TLS 1.1, our customers may need to support the older protocols and cipher suites until they can plan for their deprecation. While we don't recommend re-enabling these legacy values, we are providing guidance to help customers. We encourage customers to evaluate the risk of regression before implementing the changes outlined in this article. +We understand that while we're taking steps to deprecate TLS 1.0 and TLS 1.1, our customers may need to support the older protocols and cipher suites in the meantime. While we don't recommend re-enabling these legacy values, we're providing guidance to help customers. We encourage customers to evaluate the risk of regression before implementing the changes outlined in this article. > [!NOTE] > Guest OS Family 6 release enforces TLS 1.2 by explicitly disabling TLS 1.0 and 1.1 and defining a specific set of cipher suites. For more information on Guest OS families, see [Guest OS release news](./cloud-services-guestos-update-matrix.md#family-6-releases). ## Dropping support for TLS 1.0, TLS 1.1, and older cipher suites -In support of our commitment to use best-in-class encryption, Microsoft announced plans to start migration away from TLS 1.0 and 1.1 in June of 2017. Since that initial announcement, Microsoft announced our intent to disable Transport Layer Security (TLS) 1.0 and 1.1 by default in supported versions of Microsoft Edge and Internet Explorer 11 in the first half of 2020. Similar announcements from Apple, Google, and Mozilla indicate the direction in which the industry is headed. +In support of our commitment to use best-in-class encryption, Microsoft announced plans to start migration away from TLS 1.0 and 1.1 in June of 2017. Microsoft announced our intent to disable Transport Layer Security (TLS) 1.0 and 1.1 by default in supported versions of Microsoft Edge and Internet Explorer 11 in the first half of 2020. Similar announcements from Apple, Google, and Mozilla indicate the direction in which the industry is headed. For more information, see [Preparing for TLS 1.2 in Microsoft Azure](https://azure.microsoft.com/updates/azuretls12/). ## TLS configuration -The Windows Server 2019 cloud server image is configured with TLS 1.0 and TLS 1.1 disabled at the registry level. This means applications deployed to this version of Windows AND using the Windows stack for TLS negotiation will not allow TLS 1.0 and TLS 1.1 communication. +The Windows Server 2019 cloud server image is configured with TLS 1.0 and TLS 1.1 disabled at the registry level. This means applications deployed to this version of Windows AND using the Windows stack for TLS negotiation won't allow TLS 1.0 and TLS 1.1 communication. 
The server also comes with a limited set of cipher suites: The server also comes with a limited set of cipher suites: ## Step 1: Create the PowerShell script to enable TLS 1.0 and TLS 1.1 -Use the following code as an example to create a script that enables the older protocols and cipher suites. For the purposes of this documentation, this script will be named: **TLSsettings.ps1**. Store this script on your local desktop for easy access in later steps. +Use the following code as an example to create a script that enables the older protocols and cipher suites. For the purposes of this documentation, this script is named **TLSsettings.ps1**. Store this script on your local desktop for easy access in later steps. ```powershell # You can use the -SetCipherOrder (or -sco) option to also set the TLS cipher If ($reboot) { ## Step 2: Create a command file -Create a CMD file named **RunTLSSettings.cmd** using the below. Store this script on your local desktop for easy access in later steps. +Create a CMD file named **RunTLSSettings.cmd** using the following script. Store this script on your local desktop for easy access in later steps. ```cmd SET LOG_FILE="%TEMP%\StartupLog.txt" Add the following snippet to your existing service definition file. </Startup> ``` -Here is an example that shows both the worker role and web role. +Here's an example that shows both the worker role and web role. ``` <?xml version="1.0" encoding="utf-8"?> To ensure the scripts are uploaded with every update pushed from Visual Studio, ## Step 6: Publish & Validate -Now that the above steps have been complete, publish the update to your existing Cloud Service. +Now that you've completed the previous steps, publish the update to your existing Cloud Service. You can use [SSLLabs](https://www.ssllabs.com/) to validate the TLS status of your endpoints. |
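For orientation, the heart of a script like TLSsettings.ps1 is re-enabling the protocols under the Schannel registry keys. The following is a minimal sketch of that idea only; it isn't the article's full script and omits the cipher suite and ordering logic.

```powershell
# Minimal sketch: re-enable TLS 1.0/1.1 through the Schannel registry keys.
# This is not the full TLSsettings.ps1; cipher suite handling is omitted.
foreach ($protocol in 'TLS 1.0', 'TLS 1.1') {
    foreach ($role in 'Server', 'Client') {
        $key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$protocol\$role"
        New-Item -Path $key -Force | Out-Null
        New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
        New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
    }
}
# Schannel reads these values at startup, so the role instance must be rebooted.
```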
cloud-services | Automation Manage Cloud Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/automation-manage-cloud-services.md | Title: Manage Azure Cloud Services (classic) using Azure Automation | Microsoft description: Learn about how the Azure Automation service can be used to manage Azure cloud services at scale. Previously updated : 02/21/2023 Last updated : 07/23/2024 -[Azure Automation](https://azure.microsoft.com/services/automation/) is an Azure service for simplifying cloud management through process automation. Using Azure Automation, long-running, manual, error-prone, and frequently repeated tasks can be automated to increase reliability, efficiency, and time to value for your organization. +[Azure Automation](https://azure.microsoft.com/services/automation/) is an Azure service for simplifying cloud management through process automation. When you use Azure Automation, you can automate long-running, manual, error-prone, and frequently repeated tasks to increase reliability, efficiency, and time to value for your organization. -Azure Automation provides a highly reliable and highly available workflow execution engine that scales to meet your needs as your organization grows. In Azure Automation, processes can be kicked off manually, by third-party systems, or at scheduled intervals so that tasks happen exactly when needed. +Azure Automation provides a highly reliable and highly available workflow execution engine that scales to meet your needs as your organization grows. In Azure Automation, processes can be kicked off manually, by non-Microsoft systems, or at scheduled intervals so that tasks happen exactly when needed. -Lower operational overhead and free up IT / DevOps staff to focus on work that adds business value by moving your cloud management tasks to be run automatically by Azure Automation. +Lower operational overhead and free up IT/DevOps staff to focus on work that adds business value by running your cloud management tasks automatically with Azure Automation. ## How can Azure Automation help manage Azure cloud services?-Azure cloud services can be managed in Azure Automation by using the PowerShell cmdlets that are available in the [Azure PowerShell tools](/powershell/). Azure Automation has these cloud service PowerShell cmdlets available out of the box, so that you can perform all of your cloud service management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets for other Azure services, to automate complex tasks across Azure services and third party systems. +Azure cloud services can be managed in Azure Automation by using the PowerShell cmdlets that are available in the [Azure PowerShell tools](/powershell/). Azure Automation has these cloud service PowerShell cmdlets available out of the box, so that you can perform all of your cloud service management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets for other Azure services, to automate complex tasks across Azure services and non-Microsoft systems. ## Next Steps-Now that you've learned the basics of Azure Automation and how it can be used to manage Azure cloud services, follow these links to learn more about Azure Automation. +Now that you've covered the basics of Azure Automation and how you can use it to manage Azure cloud services, follow these links to learn more about Azure Automation. 
* [Azure Automation Overview](../automation/automation-intro.md) * [My first runbook](../automation/learn/powershell-runbook-managed-identity.md) |
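To make "pairing these cmdlets" concrete, here's a minimal, hypothetical runbook sketch using the classic Azure Service Management cmdlets; the service and role names are placeholders, and it assumes the Automation account is already authenticated for classic resources.

```powershell
# Hypothetical runbook sketch: scale out a classic cloud service role by one instance.
# Assumes the Automation account already holds classic (ASM) credentials.
$serviceName = 'contoso-svc'   # placeholder cloud service name
$roleName    = 'WebRole1'      # placeholder role name

# Count the role's current instances.
$instances = @(Get-AzureRole -ServiceName $serviceName -Slot Production `
    -RoleName $roleName -InstanceDetails)

# Request one additional instance.
Set-AzureRole -ServiceName $serviceName -Slot Production `
    -RoleName $roleName -Count ($instances.Count + 1)
```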
cloud-services | Cloud Services Allocation Failures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-allocation-failures.md | Title: Troubleshooting Cloud Service (classic) allocation failures | Microsoft D description: Troubleshoot an allocation failure when you deploy Azure Cloud Services. Learn how allocation works and why allocation can fail. Previously updated : 02/21/2023 Last updated : 07/23/2024 - -When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits. This article explains the causes of some of the common allocation failures and suggests possible remediation. The information may also be useful when you plan the deployment of your services. +When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits. This article explains the causes of some of the common allocation failures and suggests possible remediation. The information can also be useful when you plan the deployment of your services. [!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)] ### Background – How allocation works -The servers in Azure datacenters are partitioned into clusters. A new cloud service allocation request is attempted in multiple clusters. When the first instance is deployed to a cloud service(in either staging or production), that cloud service gets pinned to a cluster. Any further deployments for the cloud service will happen in the same cluster. In this article, we'll refer to this as "pinned to a cluster". Diagram 1 below illustrates the case of a normal allocation which is attempted in multiple clusters; Diagram 2 illustrates the case of an allocation that's pinned to Cluster 2 because that's where the existing Cloud Service CS_1 is hosted. +The servers in Azure datacenters are partitioned into clusters. A new cloud service allocation request is attempted in multiple clusters. When the first instance is deployed to a cloud service (in either staging or production), that cloud service gets pinned to a cluster. Any further deployments for the cloud service happen in the same cluster. In this article, we refer to this state as "pinned to a cluster." The following diagram illustrates the case of a normal allocation, which is attempted in multiple clusters. The second diagram illustrates the case of an allocation pinned to Cluster 2 because that's where the existing Cloud Service CS_1 is hosted. ![Allocation Diagram](./media/cloud-services-allocation-failure/Allocation1.png) ### Why allocation failure happens -When an allocation request is pinned to a cluster, there's a higher chance of failing to find free resources since the available resource pool is limited to a cluster. Furthermore, if your allocation request is pinned to a cluster but the type of resource you requested is not supported by that cluster, your request will fail even if the cluster has free resource. Diagram 3 below illustrates the case where a pinned allocation fails because the only candidate cluster does not have free resources. 
Diagram 4 illustrates the case where a pinned allocation fails because the only candidate cluster does not support the requested VM size, even though the cluster has free resources. +When an allocation request is pinned to a cluster, there's a higher chance of failing to find free resources since the available resource pool is limited to that cluster. Furthermore, if your allocation request is pinned to a cluster but the cluster doesn't support the resource type you requested, your request fails even if the cluster has free resources. The next diagram illustrates the case where a pinned allocation fails because the only candidate cluster doesn't have free resources. Diagram 4 illustrates the case where a pinned allocation fails because the only candidate cluster doesn't support the requested virtual machine (VM) size, even though the cluster has free resources. ![Pinned Allocation Failure](./media/cloud-services-allocation-failure/Allocation2.png) When an allocation request is pinned to a cluster, there's a higher chance of fa In Azure portal, navigate to your cloud service and in the sidebar select *Operation logs (classic)* to view the logs. -See further solutions for the exceptions below: +See these further solutions for the exceptions: |Exception Type |Error Message |Solution | ||||-|FabricInternalServerError |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'.|[Troubleshoot FabricInternalServerError](cloud-services-troubleshoot-fabric-internal-server-error.md)| -|ServiceAllocationFailure |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'.|[Troubleshoot ServiceAllocationFailure](cloud-services-troubleshoot-fabric-internal-server-error.md)| -|LocationNotFoundForRoleSize |The operation '`{Operation ID}`' failed: 'The requested VM tier is currently not available in Region (`{Region ID}`) for this subscription. Please try another tier or deploy to a different location.'.|[Troubleshoot LocationNotFoundForRoleSize](cloud-services-troubleshoot-location-not-found-for-role-size.md)| -|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there is an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Please retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the aforementioned constraints or try deploying to a different region.|[Troubleshoot ConstrainedAllocationFailed](cloud-services-troubleshoot-constrained-allocation-failed.md)| -|OverconstrainedAllocationRequest |The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. 
If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.|[Troubleshoot OverconstrainedAllocationRequest](cloud-services-troubleshoot-overconstrained-allocation-request.md)| +|FabricInternalServerError |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'|[Troubleshoot FabricInternalServerError](cloud-services-troubleshoot-fabric-internal-server-error.md)| +|ServiceAllocationFailure |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'|[Troubleshoot ServiceAllocationFailure](cloud-services-troubleshoot-fabric-internal-server-error.md)| +|LocationNotFoundForRoleSize |The operation '`{Operation ID}`' failed: 'The requested VM tier is currently not available in Region (`{Region ID}`) for this subscription. Please try another tier or deploy to a different location.'|[Troubleshoot LocationNotFoundForRoleSize](cloud-services-troubleshoot-location-not-found-for-role-size.md)| +|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there's an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Please retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the constraints or try deploying to a different region.|[Troubleshoot ConstrainedAllocationFailed](cloud-services-troubleshoot-constrained-allocation-failed.md)| +|OverconstrainedAllocationRequest |The VM size (or combination of VM sizes) required by this deployment can't be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.|[Troubleshoot OverconstrainedAllocationRequest](cloud-services-troubleshoot-overconstrained-allocation-request.md)| Example error message: Example error message: Here are the common allocation scenarios that cause an allocation request to be pinned to a single cluster. -* Deploying to Staging Slot - If a cloud service has a deployment in either slot, then the entire cloud service is pinned to a specific cluster. This means that if a deployment already exists in the production slot, then a new staging deployment can only be allocated in the same cluster as the production slot. If the cluster is nearing capacity, the request may fail. -* Scaling - Adding new instances to an existing cloud service must allocate in the same cluster. Small scaling requests can usually be allocated, but not always. If the cluster is nearing capacity, the request may fail. -* Affinity Group - A new deployment to an empty cloud service can be allocated by the fabric in any cluster in that region, unless the cloud service is pinned to an affinity group. Deployments to the same affinity group will be attempted on the same cluster. If the cluster is nearing capacity, the request may fail. 
-* Affinity Group vNet - Older Virtual Networks were tied to affinity groups instead of regions, and cloud services in these Virtual Networks would be pinned to the affinity group cluster. Deployments to this type of virtual network will be attempted on the pinned cluster. If the cluster is nearing capacity, the request may fail. +* Deploying to Staging Slot - If a cloud service has a deployment in either slot, then the entire cloud service is pinned to a specific cluster. This means that if a deployment already exists in the production slot, then a new staging deployment can only be allocated in the same cluster as the production slot. If the cluster is nearing capacity, the request may fail. +* Scaling - Adding new instances to an existing cloud service must allocate in the same cluster. Small scaling requests can usually be allocated, but not always. If the cluster is nearing capacity, the request may fail. +* Affinity Group - The fabric in any cluster in that region can allocate a new deployment to an empty cloud service, unless the cloud service is pinned to an affinity group. Deployments to the same affinity group are attempted on the same cluster. If the cluster is nearing capacity, the request may fail. +* Affinity Group virtual network - Older Virtual Networks were tied to affinity groups instead of regions, and cloud services in these Virtual Networks would be pinned to the affinity group cluster. Deployments to this type of virtual network are attempted on the pinned cluster. If the cluster is nearing capacity, the request may fail. ## Solutions Here are the common allocation scenarios that cause an allocation request to be * Deploy the workload to a new cloud service * Update the CNAME or A record to point traffic to the new cloud service * Once zero traffic is going to the old site, you can delete the old cloud service. This solution should incur zero downtime.-2. Delete both production and staging slots - This solution will preserve your existing DNS name, but will cause downtime to your application. +2. Delete both production and staging slots - This solution preserves your existing Domain Name System (DNS) name but causes downtime to your application. * Delete the production and staging slots of an existing cloud service so that the cloud service is empty, and then- * Create a new deployment in the existing cloud service. This will re-attempt to allocation on all clusters in the region. Ensure the cloud service is not tied to an affinity group. -3. Reserved IP - This solution will preserve your existing IP address, but will cause downtime to your application. + * Create a new deployment in the existing cloud service. This solution reattempts allocation on all clusters in the region. Ensure the cloud service isn't tied to an affinity group. +3. Reserved IP - This solution preserves your existing IP address but causes downtime to your application. * Create a ReservedIP for your existing deployment using PowerShell Here are the common allocation scenarios that cause an allocation request to be New-AzureReservedIP -ReservedIPName {new reserved IP name} -Location {location} -ServiceName {existing service name} ``` - * Follow #2 from above, making sure to specify the new ReservedIP in the service's CSCFG. -4. Remove affinity group for new deployments - Affinity Groups are no longer recommended. Follow steps for #1 above to deploy a new cloud service. Ensure cloud service is not in an affinity group. -5. 
Convert to a Regional Virtual Network - See [How to migrate from Affinity Groups to a Regional Virtual Network (VNet)](/previous-versions/azure/virtual-network/virtual-networks-migrate-to-regional-vnet). + * Follow #2, making sure to specify the new ReservedIP in the service's CSCFG. +4. Remove affinity group for new deployments - Affinity Groups are no longer recommended. Follow steps for #1 to deploy a new cloud service. Ensure the cloud service isn't in an affinity group. +5. Convert to a Regional Virtual Network - See [How to migrate from Affinity Groups to a Regional Virtual Network (VNet)](/previous-versions/azure/virtual-network/virtual-networks-migrate-to-regional-vnet). |
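To make the reserved IP path (solution 3) concrete, a hypothetical sequence with the classic module might look like this; the names are placeholders.

```powershell
# Hypothetical sketch for solution 3: capture the deployment's current IP as a
# reserved IP, then confirm it before deleting slots and redeploying.
New-AzureReservedIP -ReservedIPName 'myReservedIP' -Location 'West US' `
    -ServiceName 'myCloudService'

# Note the address so you can reference the reserved IP in the new CSCFG.
Get-AzureReservedIP -ReservedIPName 'myReservedIP'
```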
cloud-services | Cloud Services Certs Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-certs-create.md | Title: Cloud Services (classic) and management certificates | Microsoft Docs description: Learn about how to create and deploy certificates for cloud services and for authenticating with the management API in Azure. Previously updated : 02/21/2023 Last updated : 07/23/2024 - -Certificates are used in Azure for cloud services ([service certificates](#what-are-service-certificates)) and for authenticating with the management API ([management certificates](#what-are-management-certificates)). This topic gives a general overview of both certificate types, how to [create](#create) and deploy them to Azure. +Certificates are used in Azure for cloud services ([service certificates](#what-are-service-certificates)) and for authenticating with the management API ([management certificates](#what-are-management-certificates)). This article gives a general overview of both certificate types, how to [create](#create) and deploy them to Azure. -Certificates used in Azure are x.509 v3 certificates and can be signed by another trusted certificate or they can be self-signed. A self-signed certificate is signed by its own creator, therefore it is not trusted by default. Most browsers can ignore this problem. You should only use self-signed certificates when developing and testing your cloud services. +Certificates used in Azure are x.509 v3 certificates. They can be self-signed or signed by another trusted certificate. A self-signed certificate is one signed by its own creator, so it isn't trusted by default, but most browsers can ignore this problem. You should only use self-signed certificates when developing and testing your cloud services. -Certificates used by Azure can contains a public key. Certificates have a thumbprint that provides a means to identify them in an unambiguous way. This thumbprint is used in the Azure [configuration file](cloud-services-configure-ssl-certificate-portal.md) to identify which certificate a cloud service should use. +Certificates used by Azure can contain a public key. Certificates have a thumbprint that provides a means to identify them in an unambiguous way. This thumbprint is used in the Azure [configuration file](cloud-services-configure-ssl-certificate-portal.md) to identify which certificate a cloud service should use. >[!Note] >Azure Cloud Services does not accept AES256-SHA256 encrypted certificates. Certificates used by Azure can contains a public key. Certificates have a thumbp ## What are service certificates? Service certificates are attached to cloud services and enable secure communication to and from the service. For example, if you deployed a web role, you would want to supply a certificate that can authenticate an exposed HTTPS endpoint. Service certificates, defined in your service definition, are automatically deployed to the virtual machine that is running an instance of your role. -You can upload service certificates to Azure either using the Azure portal or by using the classic deployment model. Service certificates are associated with a specific cloud service. They are assigned to a deployment in the service definition file. +You can upload service certificates to Azure either using the Azure portal or by using the classic deployment model. Service certificates are associated with a specific cloud service. The service definition file assigns them to a deployment. 
-Service certificates can be managed separately from your services, and may be managed by different individuals. For example, a developer may upload a service package that refers to a certificate that an IT manager has previously uploaded to Azure. An IT manager can manage and renew that certificate (changing the configuration of the service) without needing to upload a new service package. Updating without a new service package is possible because the logical name, store name, and location of the certificate is in the service definition file and while the certificate thumbprint is specified in the service configuration file. To update the certificate, it's only necessary to upload a new certificate and change the thumbprint value in the service configuration file. +Service certificates can be managed separately from your services, and different individuals may manage them. For example, a developer may upload a service package that refers to a certificate that an IT manager previously uploaded to Azure. An IT manager can manage and renew that certificate (changing the configuration of the service) without needing to upload a new service package. Updating without a new service package is possible because the logical name, store name, and location of the certificate are in the service definition file, while the certificate thumbprint is specified in the service configuration file. To update the certificate, it's only necessary to upload a new certificate and change the thumbprint value in the service configuration file. >[!Note] >The [Cloud Services FAQ - Configuration and Management](cloud-services-configuration-and-management-faq.yml) article has some helpful information about certificates. ## What are management certificates?-Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services. These are not really related to cloud services. +Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services. These certificates aren't related to cloud services. > [!WARNING] > Be careful! These types of certificates allow anyone who authenticates with them to manage the subscription they are associated with. Management certificates allow you to authenticate with the classic deployment mo > ### Limitations-There is a limit of 100 management certificates per subscription. There is also a limit of 100 management certificates for all subscriptions under a specific service administrator's user ID. If the user ID for the account administrator has already been used to add 100 management certificates and there is a need for more certificates, you can add a co-administrator to add the additional certificates. +There's a limit of 100 management certificates per subscription. There's also a limit of 100 management certificates for all subscriptions under a specific service administrator's user ID. If the user ID for the account administrator was already used to add 100 management certificates and there's a need for more certificates, you can add a coadministrator to add more certificates. 
-Additionally, management certificates can not be used with CSP subscriptions as CSP subscriptions only support the Azure Resource Manager deployment model and management certificates use the classic deployment model. Reference [Azure Resource Manager vs classic deployment model](../azure-resource-manager/management/deployment-models.md) and [Understanding Authentication with the Azure SDK for .NET](/dotnet/azure/sdk/authentication) for more information on your options for CSP subscriptions. +Additionally, management certificates can't be used with Cloud Solution Provider (CSP) subscriptions as CSP subscriptions only support the Azure Resource Manager deployment model and management certificates use the classic deployment model. Reference [Azure Resource Manager vs classic deployment model](../azure-resource-manager/management/deployment-models.md) and [Understanding Authentication with the Azure SDK for .NET](/dotnet/azure/sdk/authentication) for more information on your options for CSP subscriptions. <a name="create"></a> ## Create a new self-signed certificate You can use any tool available to create a self-signed certificate as long as th There are two easy ways to create a certificate on Windows, with the `makecert.exe` utility, or IIS. ### Makecert.exe-This utility has been deprecated and is no longer documented here. For more information, see [this MSDN article](/windows/desktop/SecCrypto/makecert). +This utility is retired and is no longer documented here. For more information, see [this Microsoft Developer Network (MSDN) article](/windows/desktop/SecCrypto/makecert). ### PowerShell ```powershell Export-Certificate -Type CERT -Cert $cert -FilePath .\my-cert-file.cer ``` ### Internet Information Services (IIS)-There are many pages on the internet that cover how to do this with IIS. [Here](https://www.sslshopper.com/article-how-to-create-a-self-signed-certificate-in-iis-7.html) is a great one I found that I think explains it well. +There are many pages on the internet that cover how to create certificates with IIS, such as [When to Use an IIS Self Signed Certificate](https://www.sslshopper.com/article-how-to-create-a-self-signed-certificate-in-iis-7.html). ### Linux-[This](../virtual-machines/linux/mac-create-ssh-keys.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) article describes how to create certificates with SSH. +[Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) describes how to create certificates with SSH. ## Next steps [Upload your service certificate to the Azure portal](cloud-services-configure-ssl-certificate-portal.md). |
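Because the PowerShell snippet above shows only an export step, here's a fuller minimal sketch; the DNS name and password are placeholders, and an elevated session on a recent Windows version is assumed.

```powershell
# Minimal sketch (hypothetical DNS name and password): create a self-signed
# certificate for testing and export it as a .pfx for upload to Azure.
$cert = New-SelfSignedCertificate -DnsName 'sslexample.cloudapp.net' `
    -CertStoreLocation 'cert:\LocalMachine\My' -KeyLength 2048

$pfxPassword = ConvertTo-SecureString -String '<your-password>' -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath '.\my-cert-file.pfx' -Password $pfxPassword
```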
cloud-services | Cloud Services Choose Me | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-choose-me.md | Title: What is Azure Cloud Services (classic) | Microsoft Docs -description: Learn about what Azure Cloud Services is, specifically that it's designed to support applications that are scalable, reliable, and inexpensive to operate. +description: Learn about what Azure Cloud Services is, specifically how it's designed to support applications that are scalable, reliable, and inexpensive to operate. Previously updated : 02/21/2023 Last updated : 07/23/2024 Azure Cloud Services is an example of a [platform as a service](https://azure.mi ![Azure Cloud Services diagram](./media/cloud-services-choose-me/diagram.png) -More control also means less ease of use. Unless you need the additional control options, it's typically quicker and easier to get a web application up and running in the Web Apps feature of App Service compared to Azure Cloud Services. +More control also means less ease of use. Unless you need the extra control options, it's typically quicker and easier to get a web application up and running in the Web Apps feature of App Service compared to Azure Cloud Services. There are two types of Azure Cloud Services roles. The only difference between the two is how your role is hosted on the VMs: -* **Web role**: Automatically deploys and hosts your app through IIS. +* **Web role**: Automatically deploys and hosts your app through Internet Information Services (IIS). -* **Worker role**: Doesn't use IIS, and runs your app standalone. +* **Worker role**: Doesn't use IIS, and runs your app standalone. For example, a simple application might use just a single web role, serving a website. A more complex application might use a web role to handle incoming requests from users, and then pass those requests on to a worker role for processing. (This communication might use [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) or [Azure Queue storage](../storage/common/storage-introduction.md).) An Azure Cloud Services application is typically made available to users via a t ## Monitoring Azure Cloud Services also provides monitoring. Like Virtual Machines, it detects a failed physical server and restarts the VMs that were running on that server on a new machine. But Azure Cloud Services also detects failed VMs and applications, not just hardware failures. Unlike Virtual Machines, it has an agent inside each web and worker role, and so it's able to start new VMs and application instances when failures occur. -The PaaS nature of Azure Cloud Services has other implications, too. One of the most important is that applications built on this technology should be written to run correctly when any web or worker role instance fails. To achieve this, an Azure Cloud Services application shouldn't maintain state in the file system of its own VMs. Unlike VMs created with Virtual Machines, writes made to Azure Cloud Services VMs aren't persistent. There's nothing like a Virtual Machines data disk. Instead, an Azure Cloud Services application should explicitly write all state to Azure SQL Database, blobs, tables, or some other external storage. Building applications this way makes them easier to scale and more resistant to failure, which are both important goals of Azure Cloud Services. +The PaaS nature of Azure Cloud Services has other implications, too. 
One of the most important implications is that you should write applications built on this technology to run correctly when any web or worker role instance fails. To achieve this goal, an Azure Cloud Services application shouldn't maintain state in the file system of its own VMs. Unlike VMs created with Virtual Machines, writes made to Azure Cloud Services VMs aren't persistent. There's nothing like a Virtual Machines data disk. Instead, an Azure Cloud Services application should explicitly write all state to Azure SQL Database, blobs, tables, or some other external storage. Building applications this way makes them easier to scale and more resistant to failure. Scalability and resiliency are both important goals of Azure Cloud Services. ## Next steps * [Create a cloud service app in .NET](cloud-services-dotnet-get-started.md) |
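To illustrate the externalized-state guidance, here's a minimal, hypothetical sketch that persists state to blob storage with the Az.Storage module; the account, key, container, and file names are illustrative only, not part of the article.

```powershell
# Illustrative sketch only: write role state to blob storage instead of the
# role instance's local disk. All names and the key are placeholders.
$ctx = New-AzStorageContext -StorageAccountName 'contosostate' `
    -StorageAccountKey '<storage-account-key>'

Set-AzStorageBlobContent -Context $ctx -Container 'state' `
    -File '.\session-state.json' -Blob 'session-state.json' -Force
```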
cloud-services | Cloud Services Configure Ssl Certificate Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md | Title: Configure TLS for a cloud service | Microsoft Docs description: Learn how to specify an HTTPS endpoint for a web role and how to upload a TLS/SSL certificate to secure your application. These examples use the Azure portal. Previously updated : 02/21/2023 Last updated : 07/23/2024 Transport Layer Security (TLS), previously known as Secure Socket Layer (SSL) en > The procedures in this task apply to Azure Cloud Services; for App Services, see [Secure a custom DNS name with a TLS/SSL binding](../app-service/configure-ssl-bindings.md). > -This task uses a production deployment. Information on using a staging deployment is provided at the end of this topic. +This task uses a production deployment. Information on using a staging deployment is provided at the end of this article. -Read [this](cloud-services-how-to-create-deploy-portal.md) first if you have not yet created a cloud service. +Read [How to create and deploy an Azure Cloud Service (classic)](cloud-services-how-to-create-deploy-portal.md) first if you haven't yet created a cloud service. ## Step 1: Get a TLS/SSL certificate-To configure TLS for an application, you first need to get a TLS/SSL certificate that has been signed by a Certificate Authority (CA), a trusted third party who issues certificates for this purpose. If you do not already have one, you need to obtain one from a company that sells TLS/SSL certificates. +To configure TLS for an application, you first need to get a TLS/SSL certificate signed by a Certificate Authority (CA), a trusted partner who issues certificates for this purpose. If you don't already have one, you need to obtain one from a company that sells TLS/SSL certificates. The certificate must meet the following requirements for TLS/SSL certificates in Azure: * The certificate must contain a public key. * The certificate must be created for key exchange, exportable to a Personal Information Exchange (.pfx) file.-* The certificate's subject name must match the domain used to access the cloud service. You cannot obtain a TLS/SSL certificate from a certificate authority (CA) for the cloudapp.net domain. You must acquire a custom domain name to use when access your service. When you request a certificate from a CA, the certificate's subject name must match the custom domain name used to access your application. For example, if your custom domain name is **contoso.com** you would request a certificate from your CA for ***.contoso.com** or **www\.contoso.com**. +* The certificate's subject name must match the domain used to access the cloud service. You can't obtain a TLS/SSL certificate from a certificate authority (CA) for the cloudapp.net domain. You must acquire a custom domain name to use when accessing your service. When you request a certificate from a CA, the certificate's subject name must match the custom domain name used to access your application. For example, if your custom domain name is **contoso.com** you would request a certificate from your CA for ***.contoso.com** or **www\.contoso.com**. * The certificate must use a minimum of 2048-bit encryption. -For test purposes, you can [create](cloud-services-certs-create.md) and use a self-signed certificate. A self-signed certificate is not authenticated through a CA and can use the cloudapp.net domain as the website URL. 
For example, the following task uses a self-signed certificate in which the common name (CN) used in the certificate is **sslexample.cloudapp.net**. +For test purposes, you can [create](cloud-services-certs-create.md) and use a self-signed certificate. A self-signed certificate isn't authenticated through a CA and can use the cloudapp.net domain as the website URL. For example, the following task uses a self-signed certificate in which the common name (CN) used in the certificate is **sslexample.cloudapp.net**. Next, you must include information about the certificate in your service definition and service configuration files. Your application must be configured to use the certificate, and an HTTPS endpoin </WebRole> ``` - The **Certificates** section defines the name of our certificate, its location, and the name of the store where it is located. + The **Certificates** section defines the name of our certificate, its location, and the name of the store where it's located. Permissions (`permissionLevel` attribute) can be set to one of the following values: Your application must be configured to use the certificate, and an HTTPS endpoin </WebRole> ``` - All the required changes to the service definition file have been - completed; but, you still need to add the certificate information to - the service configuration file. -4. In your service configuration file (CSCFG), ServiceConfiguration.Cloud.cscfg, add a **Certificates** -value with that of your certificate. The following code sample provides - details of the **Certificates** section, except for the thumbprint value. + All the required changes to the service definition file are complete, but you still need to add the certificate information to the service configuration file. ++4. In your service configuration file (CSCFG), ServiceConfiguration.Cloud.cscfg, add a **Certificates** section with the values for your certificate. The following code sample provides details of the **Certificates** section, except for the thumbprint value. ```xml <Role name="Deployment"> value with that of your certificate. The following code sample provides (This example uses **sha1** for the thumbprint algorithm. Specify the appropriate value for your certificate's thumbprint algorithm.) -Now that the service definition and service configuration files have -been updated, package your deployment for uploading to Azure. If -you are using **cspack**, don't use the -**/generateConfigurationFile** flag, as that will overwrite the -certificate information you just inserted. +Now that you've updated the service definition and service configuration files, package your deployment for uploading to Azure. If +you're using **cspack**, don't use the +**/generateConfigurationFile** flag, as that overwrites the +certificate information you inserted. ## Step 3: Upload a certificate Connect to the Azure portal and... Connect to the Azure portal and... ![Publish your cloud service](media/cloud-services-configure-ssl-certificate-portal/browse.png) -2. Click **Certificates**. +2. Select **Certificates**. ![Click the certificates icon](media/cloud-services-configure-ssl-certificate-portal/certificate-item.png) -3. Click **Upload** at the top of the certificates area. +3. Select **Upload** at the top of the certificates area. ![Click the Upload menu item](media/cloud-services-configure-ssl-certificate-portal/Upload_menu.png) -4. Provide the **File**, **Password**, then click **Upload** at the bottom of the data entry area. +4. 
Provide the **File** and **Password**, and then select **Upload** at the bottom of the data entry area. ## Step 4: Connect to the role instance by using HTTPS Now that your deployment is up and running in Azure, you can connect to it using HTTPS. -1. Click the **Site URL** to open up the web browser. +1. Select the **Site URL** to open your site in the web browser. ![Click the Site URL](media/cloud-services-configure-ssl-certificate-portal/navigate.png) |
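As a scripted alternative to the portal upload in step 3, the classic module can push the certificate; a hypothetical sketch follows, with the service name and password as placeholders.

```powershell
# Hypothetical alternative to the portal upload: deploy the certificate with the
# classic Azure module, then read its thumbprint for the CSCFG entry.
Add-AzureCertificate -ServiceName 'sslexample' `
    -CertToDeploy '.\my-cert-file.pfx' -Password '<your-password>'

# Prompts for the .pfx password and prints the thumbprint to paste into the CSCFG.
(Get-PfxCertificate -FilePath '.\my-cert-file.pfx').Thumbprint
```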
cloud-services | Cloud Services Connect To Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-connect-to-custom-domain.md | Title: Connect a Cloud Service (classic) to a custom Domain Controller | Microso description: Learn how to connect your web/worker roles to a custom AD Domain using PowerShell and AD Domain Extension Previously updated : 02/21/2023 Last updated : 07/23/2024 -We will first set up a Virtual Network (VNet) in Azure. We will then add an Active Directory Domain Controller (hosted on an Azure Virtual Machine) to the VNet. Next, we will add existing cloud service roles to the pre-created VNet, then connect them to the Domain Controller. +We first set up a virtual network in Azure. We then add an Active Directory Domain Controller (hosted on an Azure Virtual Machine) to the virtual network. Next, we add existing cloud service roles to the precreated virtual network, then connect them to the Domain Controller. Before we get started, a couple of things to keep in mind: 1. This tutorial uses PowerShell, so make sure you have Azure PowerShell installed and ready to go. To get help with setting up Azure PowerShell, see [How to install and configure Azure PowerShell](/powershell/azure/).-2. Your AD Domain Controller and Web/Worker Role instances need to be in the VNet. +2. Your AD Domain Controller and Web/Worker Role instances need to be in the virtual network. -Follow this step-by-step guide and if you run into any issues, leave us a comment at the end of the article. Someone will get back to you (yes, we do read comments). +Follow this step-by-step guide, and if you run into any issues, leave us a comment at the end of the article. The network that is referenced by the cloud service must be a **classic virtual network**. -## Create a Virtual Network -You can create a Virtual Network in Azure using the Azure portal or PowerShell. For this tutorial, PowerShell is used. To create a virtual network using the Azure portal, see [Create a virtual network](../virtual-network/quick-create-portal.md). The article covers creating a virtual network (Resource Manager), but you must create a virtual network (Classic) for cloud services. To do so, in the portal, select **Create a resource**, type *virtual network* in the **Search** box, and then press **Enter**. In the search results, under **Everything**, select **Virtual network**. Under **Select a deployment model**, select **Classic**, then select **Create**. You can then follow the steps in the article. +## Create a virtual network +You can create a virtual network in Azure using the Azure portal or PowerShell. For this tutorial, PowerShell is used. To create a virtual network using the Azure portal, see [Create a virtual network](../virtual-network/quick-create-portal.md). The article covers creating a virtual network (Resource Manager), but you must create a virtual network (Classic) for cloud services. To do so, in the portal, select **Create a resource**, type *virtual network* in the **Search** box, and then press **Enter**. In the search results, under **Everything**, select **Virtual network**. Under **Select a deployment model**, select **Classic**, then select **Create**. You can then follow the steps in the article. 
```powershell-#Create Virtual Network +#Create virtual network $vnetStr = @"<?xml version="1.0" encoding="utf-8"?> Set-AzureVNetConfig -ConfigurationPath $vnetConfigPath ``` ## Create a Virtual Machine-Once you have completed setting up the Virtual Network, you will need to create an AD Domain Controller. For this tutorial, we will be setting up an AD Domain Controller on an Azure Virtual Machine. +Once you complete setting up the virtual network, you need to create an AD Domain Controller. For this tutorial, we set up an AD Domain Controller on an Azure Virtual Machine (VM). -To do this, create a virtual machine through PowerShell using the following commands: +Create a virtual machine through PowerShell using the following commands: ```powershell # Initialize variables $username = '<your-username>' $password = '<your-password>' $affgrp = '<your- affgrp>' -# Create a VM and add it to the Virtual Network +# Create a VM and add it to the virtual network New-AzureQuickVM -Windows -ServiceName $vmsvc1 -Name $vm1 -ImageName $imgname -AdminUsername $username -Password $password -AffinityGroup $affgrp -SubnetNames $subnetname -VNetName $vnetname ``` ## Promote your Virtual Machine to a Domain Controller-To configure the Virtual Machine as an AD Domain Controller, you will need to log in to the VM and configure it. +To configure the Virtual Machine as an AD Domain Controller, you need to sign in to the VM and configure it. -To log in to the VM, you can get the RDP file through PowerShell, use the following commands: +To sign in to the VM, you can get the Remote Desktop Protocol (RDP) file through PowerShell. Use the following commands: ```powershell # Get RDP file Get-AzureRemoteDesktopFile -ServiceName $vmsvc1 -Name $vm1 -LocalPath <rdp-file-path> ``` -Once you are signed in to the VM, set up your Virtual Machine as an AD Domain Controller by following the step-by-step guide on [How to set up your customer AD Domain Controller](https://social.technet.microsoft.com/wiki/contents/articles/12370.windows-server-2012-set-up-your-first-domain-controller-step-by-step.aspx). +Once you sign into the VM, set up your Virtual Machine as an AD Domain Controller by following the step-by-step guide on [How to set up your customer AD Domain Controller](https://social.technet.microsoft.com/wiki/contents/articles/12370.windows-server-2012-set-up-your-first-domain-controller-step-by-step.aspx). ## Add your Cloud Service to the virtual network -Next, you need to add your cloud service deployment to the new VNet. To do this, modify your cloud service cscfg by adding the relevant sections to your cscfg using Visual Studio or the editor of your choice. +Next, you need to add your cloud service deployment to the new virtual network. To add your cloud service deployment, modify your cloud service CSCFG by adding the relevant sections, using Visual Studio or the editor of your choice. ```xml <ServiceConfiguration serviceName="[hosted-service-name]" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="[os-family]" osVersion="*"> |
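Before promoting the VM, it can help to confirm that it received an address from the virtual network subnet; a small, hypothetical check reusing the variables defined earlier:

```powershell
# Hypothetical check that the new VM landed in the virtual network subnet,
# reusing the $vmsvc1 and $vm1 variables from the earlier steps.
Get-AzureVM -ServiceName $vmsvc1 -Name $vm1 | Select-Object Name, IpAddress
```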
cloud-services | Cloud Services Custom Domain Name Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-custom-domain-name-portal.md | Title: Configure a custom domain name in Cloud Services (classic) | Microsoft Docs -description: Learn how to expose your Azure application or data to the internet on a custom domain by configuring DNS settings. These examples use the Azure portal. +description: Learn how to expose your Azure application or data to the internet on a custom domain by configuring Domain Name System (DNS) settings. These examples use the Azure portal. Previously updated : 02/21/2023 Last updated : 07/23/2024 -When you create a Cloud Service, Azure assigns it to a subdomain of **cloudapp.net**. For example, if your Cloud Service is named "contoso", your users will be able to access your application on a URL like `http://contoso.cloudapp.net`. Azure also assigns a virtual IP address. +When you create a Cloud Service, Azure assigns it to a subdomain of **cloudapp.net**. For example, if your Cloud Service is named `contoso`, your users are able to access your application on a URL like `http://contoso.cloudapp.net`. Azure also assigns a virtual IP address. However, you can also expose your application on your own domain name, such as **contoso.com**. This article explains how to reserve or configure a custom domain name for Cloud Service web roles. Do you already understand what CNAME and A records are? [Jump past the explanati > ## Understand CNAME and A records-CNAME (or alias records) and A records both allow you to associate a domain name with a specific server (or service in this case,) however they work differently. There are also some specific considerations when using A records with Azure Cloud services that you should consider before deciding which to use. +CNAME (or alias records) and A records both allow you to associate a domain name with a specific server (or service in this case); however, they work differently. There are also some specific considerations when using A records with Azure Cloud services that you should consider before deciding which to use. ### CNAME or Alias record-A CNAME record maps a *specific* domain, such as **contoso.com** or **www\.contoso.com**, to a canonical domain name. In this case, the canonical domain name is the **[myapp].cloudapp.net** domain name of your Azure hosted application. Once created, the CNAME creates an alias for the **[myapp].cloudapp.net**. The CNAME entry will resolve to the IP address of your **[myapp].cloudapp.net** service automatically, so if the IP address of the cloud service changes, you do not have to take any action. +A CNAME record maps a *specific* domain, such as **contoso.com** or **www\.contoso.com**, to a canonical domain name. In this case, the canonical domain name is the **[myapp].cloudapp.net** domain name of your Azure hosted application. Once created, the CNAME creates an alias for the **[myapp].cloudapp.net**. The CNAME entry resolves to the IP address of your **[myapp].cloudapp.net** service automatically, so if the IP address of the cloud service changes, you don't have to take any action. > [!NOTE] > Some domain registrars only allow you to map subdomains when using a CNAME record, such as www\.contoso.com, and not root names, such as contoso.com. 
For more information on CNAME records, see the documentation provided by your registrar, [the Wikipedia entry on CNAME record](https://en.wikipedia.org/wiki/CNAME_record), or the [IETF Domain Names - Implementation and Specification](https://tools.ietf.org/html/rfc1035) document. ### A record-An *A* record maps a domain, such as **contoso.com** or **www\.contoso.com**, *or a wildcard domain* such as **\*.contoso.com**, to an IP address. In the case of an Azure Cloud Service, the virtual IP of the service. So the main benefit of an A record over a CNAME record is that you can have one entry that uses a wildcard, such as \***.contoso.com**, which would handle requests for multiple sub-domains such as **mail.contoso.com**, **login.contoso.com**, or **www\.contso.com**. +An *A* record maps a domain, such as **contoso.com** or **www\.contoso.com**, *or a wildcard domain* such as **\*.contoso.com**, to an IP address. With an Azure Cloud Service, this address is the virtual IP of the service. So the main benefit of an A record over a CNAME record is that you can have one entry that uses a wildcard, such as \***.contoso.com**, which would handle requests for multiple subdomains such as **mail.contoso.com**, **login.contoso.com**, or **www\.contoso.com**. > [!NOTE] > Since an A record is mapped to a static IP address, it cannot automatically resolve changes to the IP address of your Cloud Service. The IP address used by your Cloud Service is allocated the first time you deploy to an empty slot (either production or staging.) If you delete the deployment for the slot, the IP address is released by Azure and any future deployments to the slot may be given a new IP address. To create a CNAME record, you must add a new entry in the DNS table for your cus 1. Use one of these methods to find the **.cloudapp.net** domain name assigned to your cloud service. - * Login to the [Azure portal], select your cloud service, look at the **Overview** section and then find the **Site URL** entry. + * Sign in to the [Azure portal], select your cloud service, look at the **Overview** section and then find the **Site URL** entry. ![quick glance section showing the site URL][csurl] To create a CNAME record, you must add a new entry in the DNS table for your cus Get-AzureDeployment -ServiceName yourservicename | Select Url ``` - Save the domain name used in the URL returned by either method, as you will need it when creating a CNAME record. -2. Log on to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**. -3. Now find where you can select or enter CNAME's. You may have to select the record type from a drop down, or go to an advanced settings page. You should look for the words **CNAME**, **Alias**, or **Subdomains**. + Save the domain name used in the URL returned by either method, as you need it when creating a CNAME record. +2. Sign in to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**. +3. Now find where you can select or enter CNAMEs. You may have to select the record type from a drop-down or go to an advanced settings page. You should look for the words **CNAME**, **Alias**, or **Subdomains**. 4. You must also provide the domain or subdomain alias for the CNAME, such as **www** if you want to create an alias for **www\.customdomain.com**. 
If you want to create an alias for the root domain, it may be listed as the '**\@**' symbol in your registrar's DNS tools. 5. Then, you must provide a canonical host name, which is your application's **cloudapp.net** domain in this case. For example, the following CNAME record forwards all traffic from **www\.contoso > (contoso.cloudapp.net), so the forwarding process is invisible to the > end user. > -> The example above only applies to traffic at the **www** subdomain. Since you cannot use wildcards with CNAME records, you must create one CNAME for each domain/subdomain. If you want to direct traffic from subdomains, such as *.contoso.com, to your cloudapp.net address, you can configure a **URL Redirect** or **URL Forward** entry in your DNS settings, or create an A record. +> The preceding example only applies to traffic at the **www** subdomain. Since you cannot use wildcards with CNAME records, you must create one CNAME for each domain/subdomain. If you want to direct traffic from subdomains, such as *.contoso.com, to your cloudapp.net address, you can configure a **URL Redirect** or **URL Forward** entry in your DNS settings, or create an A record. ## Add an A record for your custom domain To create an A record, you must first find the virtual IP address of your cloud service. Then add a new entry in the DNS table for your custom domain by using the tools provided by your registrar. Each registrar has a similar but slightly different method of specifying an A record, but the concepts are the same. 1. Use one of the following methods to get the IP address of your cloud service. - * Login to the [Azure portal], select your cloud service, look at the **Overview** section and then find the **Public IP addresses** entry. + * Sign in to the [Azure portal], select your cloud service, look at the **Overview** section and then find the **Public IP addresses** entry. ![quick glance section showing the VIP][vip] To create an A record, you must first find the virtual IP address of your cloud get-azurevm -servicename yourservicename | get-azureendpoint -VM {$_.VM} | select Vip ``` - Save the IP address, as you will need it when creating an A record. -2. Log on to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**. -3. Now find where you can select or enter A record's. You may have to select the record type from a drop down, or go to an advanced settings page. -4. Select or enter the domain or subdomain that will use this A record. For example, select **www** if you want to create an alias for **www\.customdomain.com**. If you want to create a wildcard entry for all subdomains, enter '*****'. This will cover all sub-domains such as **mail.customdomain.com**, **login.customdomain.com**, and **www\.customdomain.com**. + Save the IP address, as you need it when creating an A record. +2. Sign in to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**. +3. Now find where you can select or enter A records. You may have to select the record type from a drop-down, or go to an advanced settings page. +4. Select or enter the domain or subdomain that uses this A record. For example, select **www** if you want to create an alias for **www\.customdomain.com**. If you want to create a wildcard entry for all subdomains, enter `*`. 
This entry covers all subdomains such as **mail.customdomain.com**, **login.customdomain.com**, and **www\.customdomain.com**. If you want to create an A record for the root domain, it may be listed as the '**\@**' symbol in your registrar's DNS tools.-5. Enter the IP address of your cloud service in the provided field. This associates the domain entry used in the A record with the IP address of your cloud service deployment. +5. Enter the IP address of your cloud service in the provided field. This step associates the domain entry used in the A record with the IP address of your cloud service deployment. For example, the following A record forwards all traffic from **contoso.com** to **137.135.70.239**, the IP address of your deployed application: This example demonstrates creating an A record for the root domain. If you wish ## Next steps * [How to Manage Cloud Services](cloud-services-how-to-manage-portal.md)-* [How to Map CDN Content to a Custom Domain](../cdn/cdn-map-content-to-custom-domain.md) +* [How to Map Content Delivery Network (CDN) Content to a Custom Domain](../cdn/cdn-map-content-to-custom-domain.md) * [General configuration of your cloud service](cloud-services-how-to-configure-portal.md). * Learn how to [deploy a cloud service](cloud-services-how-to-create-deploy-portal.md). * Configure [TLS/SSL certificates](cloud-services-configure-ssl-certificate-portal.md). |
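Both procedures above boil down to copying one value into your registrar's DNS tools: the **cloudapp.net** domain for a CNAME record, or the virtual IP for an A record. The following minimal sketch retrieves both with the classic Azure Service Management module; `yourservicename` is a placeholder, and the straight pipeline form is an assumption rather than the article's exact commands.

```powershell
# Canonical domain name to use as the CNAME target:
(Get-AzureDeployment -ServiceName "yourservicename").Url

# Virtual IP address to use as the A record target:
Get-AzureVM -ServiceName "yourservicename" |
    Get-AzureEndpoint |
    Select-Object -ExpandProperty Vip -Unique
```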
cloud-services | Cloud Services Diagnostics Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-diagnostics-powershell.md | Title: Enable diagnostics in Azure Cloud Services (classic) using PowerShell | M description: Learn how to use PowerShell to enable collecting diagnostic data from an Azure Cloud Service with the Azure Diagnostics extension. Previously updated : 02/21/2023 Last updated : 07/23/2024 -You can collect diagnostic data like application logs, performance counters etc. from a Cloud Service using the Azure Diagnostics extension. This article describes how to enable the Azure Diagnostics extension for a Cloud Service using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article. +You can collect diagnostic data such as application logs and performance counters from a Cloud Service using the Azure Diagnostics extension. This article describes how to enable the Azure Diagnostics extension for a Cloud Service using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article. ## Enable diagnostics extension as part of deploying a Cloud Service This approach is applicable to continuous integration type of scenarios, where the diagnostics extension can be enabled as part of deploying the cloud service. When creating a new Cloud Service deployment, you can enable the diagnostics extension by passing in the *ExtensionConfiguration* parameter to the [New-AzureDeployment](/powershell/module/servicemanagement/azure/new-azuredeployment) cmdlet. The *ExtensionConfiguration* parameter takes an array of diagnostics configurations that can be created using the [New-AzureServiceDiagnosticsExtensionConfig](/powershell/module/servicemanagement/azure/new-azureservicediagnosticsextensionconfig) cmdlet. $workerrole_diagconfig = New-AzureServiceDiagnosticsExtensionConfig -Role "Worke New-AzureDeployment -ServiceName $service_name -Slot Production -Package $service_package -Configuration $service_config -ExtensionConfiguration @($webrole_diagconfig,$workerrole_diagconfig) ``` -If the diagnostics configuration file specifies a `StorageAccount` element with a storage account name, then the `New-AzureServiceDiagnosticsExtensionConfig` cmdlet will automatically use that storage account. For this to work, the storage account needs to be in the same subscription as the Cloud Service being deployed. +If the diagnostics configuration file specifies a `StorageAccount` element with a storage account name, then the `New-AzureServiceDiagnosticsExtensionConfig` cmdlet automatically uses that storage account. For this configuration to work, the storage account needs to be in the same subscription as the Cloud Service being deployed. -From Azure SDK 2.6 onward the extension configuration files generated by the MSBuild publish target output will include the storage account name based on the diagnostics configuration string specified in the service configuration file (.cscfg). The script below shows you how to parse the Extension configuration files from the publish target output and configure diagnostics extension for each role when deploying the cloud service. +From Azure SDK 2.6 onward, the extension configuration files generated by the MSBuild publish target output include the storage account name based on the diagnostics configuration string specified in the service configuration file (.cscfg). 
The following script shows you how to parse the extension configuration files from the publish target output and configure the diagnostics extension for each role when deploying the cloud service. ```powershell $service_name = "MyService" foreach ($extPath in $diagnosticsExtensions) New-AzureDeployment -ServiceName $service_name -Slot Production -Package $service_package -Configuration $service_config -ExtensionConfiguration $diagnosticsConfigurations ``` -Visual Studio Online uses a similar approach for automated deployments of Cloud Services with the diagnostics extension. See [Publish-AzureCloudDeployment.ps1](https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureCloudPowerShellDeploymentV1/Publish-AzureCloudDeployment.ps1) for a complete example. +Azure Pipelines uses a similar approach for automated deployments of Cloud Services with the diagnostics extension. See [Publish-AzureCloudDeployment.ps1](https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureCloudPowerShellDeploymentV1/Publish-AzureCloudDeployment.ps1) for a complete example. -If no `StorageAccount` was specified in the diagnostics configuration, then you need to pass in the *StorageAccountName* parameter to the cmdlet. If the *StorageAccountName* parameter is specified, then the cmdlet will always use the storage account that is specified in the parameter and not the one that is specified in the diagnostics configuration file. +If no `StorageAccount` was specified in the diagnostics configuration, then you need to pass in the *StorageAccountName* parameter to the cmdlet. If you specify the *StorageAccountName* parameter, then the cmdlet uses the storage account specified in the parameter and not the one specified in the diagnostics configuration file. -If the diagnostics storage account is in a different subscription from the Cloud Service, then you need to explicitly pass in the *StorageAccountName* and *StorageAccountKey* parameters to the cmdlet. The *StorageAccountKey* parameter is not needed when the diagnostics storage account is in the same subscription, as the cmdlet can automatically query and set the key value when enabling the diagnostics extension. However, if the diagnostics storage account is in a different subscription, then the cmdlet might not be able to get the key automatically and you need to explicitly specify the key through the *StorageAccountKey* parameter. +If the diagnostics storage account is in a different subscription from the Cloud Service, then you need to explicitly pass in the *StorageAccountName* and *StorageAccountKey* parameters to the cmdlet. The *StorageAccountKey* parameter isn't needed when the diagnostics storage account is in the same subscription, as the cmdlet can automatically query and set the key value when enabling the diagnostics extension. However, if the diagnostics storage account is in a different subscription, then the cmdlet might not be able to get the key automatically and you need to explicitly specify the key through the *StorageAccountKey* parameter. 
```powershell $webrole_diagconfig = New-AzureServiceDiagnosticsExtensionConfig -Role "WebRole" -DiagnosticsConfigurationPath $webrole_diagconfigpath -StorageAccountName $diagnosticsstorage_name -StorageAccountKey $diagnosticsstorage_key Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService" -Role "WebRole" ``` ## Next Steps-* For additional guidance on using Azure diagnostics and other techniques to troubleshoot problems, see [Enabling Diagnostics in Azure Cloud Services and Virtual Machines](cloud-services-dotnet-diagnostics.md). +* For more information on using Azure diagnostics and other techniques to troubleshoot problems, see [Enabling Diagnostics in Azure Cloud Services and Virtual Machines](cloud-services-dotnet-diagnostics.md). * The [Diagnostics Configuration Schema](../azure-monitor/agents/diagnostics-extension-schema-windows.md) explains the various XML configuration options for the diagnostics extension. * To learn how to enable the diagnostics extension for Virtual Machines, see [Create a Windows Virtual machine with monitoring and diagnostics using Azure Resource Manager Template](../virtual-machines/extensions/diagnostics-template.md) |
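The cross-subscription case called out above is the one that most often needs an explicit key. The following is a minimal sketch of that path, assuming the classic Azure Service Management module; every name, the configuration file path, and the key value are hypothetical placeholders.

```powershell
# All names are placeholders; the key comes from the storage account's subscription.
$key = "<storage-account-key-from-the-other-subscription>"

# Build a per-role diagnostics configuration with explicit storage credentials.
$diagConfig = New-AzureServiceDiagnosticsExtensionConfig -Role "WebRole" `
    -DiagnosticsConfigurationPath "C:\diag\WebRole.PubConfig.xml" `
    -StorageAccountName "mydiagstorage" -StorageAccountKey $key

# Apply the diagnostics extension to the running deployment.
Set-AzureServiceDiagnosticsExtension -ServiceName "MyService" -Slot Production `
    -DiagnosticsConfiguration @($diagConfig)
```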
cloud-services | Cloud Services Disaster Recovery Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-disaster-recovery-guidance.md | Title: Handling an Azure service disruption that impacts Azure Cloud Services (classic) -description: Learn what to do in the event of an Azure service disruption that impacts Azure Cloud Services. +description: Learn what to do if an Azure service disruption impacts Azure Cloud Services. Previously updated : 02/21/2023 Last updated : 07/23/2024 -# What to do in the event of an Azure service disruption that impacts Azure Cloud Services (classic) +# What to do if an Azure service disruption impacts Azure Cloud Services (classic) [!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)] -At Microsoft, we work hard to make sure that our services are always available to you when you need them. Forces beyond our control sometimes impact us in ways that cause unplanned service disruptions. +At Microsoft, we work hard to make sure that our services are always available to you when you need them. Forces beyond our control sometimes affect us in ways that cause unplanned service disruptions. Microsoft provides a Service Level Agreement (SLA) for its services as a commitment for uptime and connectivity. The SLA for individual Azure services can be found at [Azure Service Level Agreements](https://azure.microsoft.com/support/legal/sla/). Azure already has many built-in platform features that support highly available applications. For more about these services, read [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery). -This article covers a true disaster recovery scenario, when a whole region experiences an outage due to major natural disaster or widespread service interruption. These are rare occurrences, but you must prepare for the possibility that there is an outage of an entire region. If an entire region experiences a service disruption, the locally redundant copies of your data would temporarily be unavailable. If you have enabled geo-replication, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region is not recoverable, Azure remaps all of the DNS entries to the geo-replicated region. +This article covers a true disaster recovery scenario, when a whole region experiences an outage due to a major natural disaster or widespread service interruption. These scenarios are rare occurrences, but you must prepare for the possibility that there's an outage of an entire region. If an entire region experiences a service disruption, the locally redundant copies of your data would temporarily be unavailable. If you enabled geo-replication, three extra copies of your Azure Storage blobs and tables are stored in a different region. If a complete regional outage occurs, or a disaster leaves the primary region unrecoverable, Azure remaps all of the Domain Name System (DNS) entries to the geo-replicated region. > [!NOTE] > Be aware that you do not have any control over this process, and it will only occur for datacenter-wide service disruptions. Because of this, you must also rely on other application-specific backup strategies to achieve the highest level of availability. 
For more information, see [Disaster recovery and high availability for applications built on Microsoft Azure](/azure/architecture/framework/resiliency/backup-and-recovery). If you want to control your own failover, consider using [read-access geo-redundant storage (RA-GRS)](../storage/common/storage-redundancy.md), which creates a read-only copy of your data in another region. The most robust disaster recovery solution involves maintaining multiple deploym ![Balancing Azure Cloud Services across regions with Azure Traffic Manager](./media/cloud-services-disaster-recovery-guidance/using-azure-traffic-manager.png) -For the fastest response to the loss of a region, it is important that you configure Traffic Manager's [endpoint monitoring](../traffic-manager/traffic-manager-monitoring.md). +For the fastest response to the loss of a region, it's important that you configure Traffic Manager's [endpoint monitoring](../traffic-manager/traffic-manager-monitoring.md). ## Option 2: Deploy your application to a new region Maintaining multiple active deployments as described in the previous option incurs additional ongoing costs. If your recovery time objective (RTO) is flexible enough and you have the original code or compiled Cloud Services package, you can create a new instance of your application in another region and update your DNS records to point to the new deployment. Depending on your application data sources, you may need to check the recovery p ## Option 3: Wait for recovery-In this case, no action on your part is required, but your service will be unavailable until the region is restored. You can see the current service status on the [Azure Service Health Dashboard](https://azure.microsoft.com/status/). +In this case, no action on your part is required, but your service is unavailable until the region is restored. You can see the current service status on the [Azure Service Health Dashboard](https://azure.microsoft.com/status/). ## Next steps To learn more about how to implement a disaster recovery and high availability strategy, see [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery). |
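Because the guidance above leans on geo-replication (and optionally RA-GRS) for regional failover, it helps to know how a storage account is replicated before an outage forces the question. A minimal sketch with the classic Azure Service Management module follows; the account name is a placeholder, and the property names are assumptions about the classic cmdlet's output.

```powershell
# "mystorageaccount" is a placeholder for your application or diagnostics storage account.
Get-AzureStorageAccount -StorageAccountName "mystorageaccount" |
    Select-Object StorageAccountName, AccountType, GeoPrimaryLocation, GeoSecondaryLocation, StatusOfSecondary
```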
cloud-services | Cloud Services Dotnet Diagnostics Trace Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md | Title: Trace the flow in Cloud Services (classic) Application with Azure Diagnos description: Add tracing messages to an Azure application to help debugging, measuring performance, monitoring, traffic analysis, and more. Previously updated : 02/21/2023 Last updated : 07/23/2024 -Tracing is a way for you to monitor the execution of your application while it is running. You can use the [System.Diagnostics.Trace](/dotnet/api/system.diagnostics.trace), [System.Diagnostics.Debug](/dotnet/api/system.diagnostics.debug), and [System.Diagnostics.TraceSource](/dotnet/api/system.diagnostics.tracesource) classes to record information about errors and application execution in logs, text files, or other devices for later analysis. For more information about tracing, see [Tracing and Instrumenting Applications](/dotnet/framework/debug-trace-profile/tracing-and-instrumenting-applications). +Tracing is a way for you to monitor the execution of your application while it's running. You can use the [System.Diagnostics.Trace](/dotnet/api/system.diagnostics.trace), [System.Diagnostics.Debug](/dotnet/api/system.diagnostics.debug), and [System.Diagnostics.TraceSource](/dotnet/api/system.diagnostics.tracesource) classes to record information about errors and application execution in logs, text files, or other devices for later analysis. For more information about tracing, see [Tracing and Instrumenting Applications](/dotnet/framework/debug-trace-profile/tracing-and-instrumenting-applications). ## Use trace statements and trace switches-Implement tracing in your Cloud Services application by adding the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) to the application configuration and making calls to System.Diagnostics.Trace or System.Diagnostics.Debug in your application code. Use the configuration file *app.config* for worker roles and the *web.config* for web roles. When you create a new hosted service using a Visual Studio template, Azure Diagnostics is automatically added to the project and the DiagnosticMonitorTraceListener is added to the appropriate configuration file for the roles that you add. +Implement tracing in your Cloud Services application by adding the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) to the application configuration and making calls to System.Diagnostics.Trace or System.Diagnostics.Debug in your application code. Use the configuration file *app.config* for worker roles and the *web.config* for web roles. When you create a new hosted service using a Visual Studio template, Azure Diagnostics is automatically added to the project, and the DiagnosticMonitorTraceListener is added to the appropriate configuration file for the roles that you add. For information on placing trace statements, see [How to: Add Trace Statements to Application Code](/dotnet/framework/debug-trace-profile/how-to-add-trace-statements-to-application-code). -By placing [Trace Switches](/dotnet/framework/debug-trace-profile/trace-switches) in your code, you can control whether tracing occurs and how extensive it is. This lets you monitor the status of your application in a production environment. This is especially important in a business application that uses multiple components running on multiple computers. 
For more information, see [How to: Configure Trace Switches](/dotnet/framework/debug-trace-profile/how-to-create-initialize-and-configure-trace-switches). +By placing [Trace Switches](/dotnet/framework/debug-trace-profile/trace-switches) in your code, you can control whether tracing occurs and how extensive it is. Tracing lets you monitor the status of your application in a production environment. Monitoring application status is especially important in a business application that uses multiple components running on multiple computers. For more information, see [How to: Configure Trace Switches](/dotnet/framework/debug-trace-profile/how-to-create-initialize-and-configure-trace-switches). ## Configure the trace listener in an Azure application-Trace, Debug and TraceSource, require you set up "listeners" to collect and record the messages that are sent. Listeners collect, store, and route tracing messages. They direct the tracing output to an appropriate target, such as a log, window, or text file. Azure Diagnostics uses the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) class. +Trace, Debug, and TraceSource require you to set up "listeners" to collect and record the messages that are sent. Listeners collect, store, and route tracing messages. They direct the tracing output to an appropriate target, such as a log, window, or text file. Azure Diagnostics uses the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) class. -Before you complete the following procedure, you must initialize the Azure diagnostic monitor. To do this, see [Enabling Diagnostics in Microsoft Azure](cloud-services-dotnet-diagnostics.md). +Before you complete the following procedure, you must initialize the Azure diagnostic monitor. To initialize the Azure diagnostic monitor, see [Enabling Diagnostics in Microsoft Azure](cloud-services-dotnet-diagnostics.md). -Note that if you use the templates that are provided by Visual Studio, the configuration of the listener is added automatically for you. +> [!NOTE] +> If you use the templates that are provided by Visual Studio, the configuration of the listener is added automatically for you. ### Add a trace listener 1. Open the web.config or app.config file for your role. -2. Add the following code to the file. Change the Version attribute to use the version number of the assembly you are referencing. The assembly version does not necessarily change with each Azure SDK release unless there are updates to it. +2. Add the following code to the file. Change the Version attribute to use the version number of the assembly you're referencing. The assembly version doesn't necessarily change with each Azure SDK release unless there are updates to it. ```xml <system.diagnostics> Note that if you use the templates that are provided by Visual Studio, the confi ``` > [!IMPORTANT]- > Make sure you have a project reference to the Microsoft.WindowsAzure.Diagnostics assembly. Update the version number in the xml above to match the version of the referenced Microsoft.WindowsAzure.Diagnostics assembly. + > Make sure you have a project reference to the Microsoft.WindowsAzure.Diagnostics assembly. Update the version number in the preceding XML to match the version of the referenced Microsoft.WindowsAzure.Diagnostics assembly. 3. Save the config file. After you complete the steps to add the listener, you can add trace statements t ### To add trace statements to your code 1. Open a source file for your application. 
For example, the \<RoleName>.cs file for the worker role or web role.-2. Add the following using directive if it has not already been added: +2. Add the following using directive if it isn't present: ``` using System.Diagnostics; ```-3. Add Trace statements where you want to capture information about the state of your application. You can use a variety of methods to format the output of the Trace statement. For more information, see [How to: Add Trace Statements to Application Code](/dotnet/framework/debug-trace-profile/how-to-add-trace-statements-to-application-code). +3. Add Trace statements where you want to capture information about the state of your application. You can use various methods to format the output of the Trace statement. For more information, see [How to: Add Trace Statements to Application Code](/dotnet/framework/debug-trace-profile/how-to-add-trace-statements-to-application-code). 4. Save the source file. |
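After the listener and trace statements described in this entry are in place, Azure Diagnostics ships the messages to table storage. The following minimal sketch confirms the log table exists; it assumes the Azure PowerShell storage cmdlets, placeholder account credentials, and the conventional default table name WADLogsTable.

```powershell
# Placeholder name and key for the diagnostics storage account.
$ctx = New-AzureStorageContext -StorageAccountName "mydiagstorage" -StorageAccountKey "<key>"

# Trace output collected by the DiagnosticMonitorTraceListener typically lands in WADLogsTable.
Get-AzureStorageTable -Name "WADLogsTable" -Context $ctx
```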
cloud-services | Cloud Services Dotnet Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics.md | Title: How to use Azure diagnostics (.NET) with Cloud Services (classic) | Micro description: Using Azure diagnostics to gather data from Azure cloud Services for debugging, measuring performance, monitoring, traffic analysis, and more. Previously updated : 02/21/2023 Last updated : 07/23/2024 -This walkthrough describes how to implement an Azure worker role that emits telemetry data using the .NET EventSource class. Azure Diagnostics is used to collect the telemetry data and store it in an Azure storage account. When creating a worker role, Visual Studio automatically enables Diagnostics 1.0 as part of the solution in Azure SDKs for .NET 2.4 and earlier. The following instructions describe the process for creating the worker role, disabling Diagnostics 1.0 from the solution, and deploying Diagnostics 1.2 or 1.3 to your worker role. +This walkthrough describes how to implement an Azure worker role that emits telemetry data using the .NET EventSource class. Azure Diagnostics is used to collect the telemetry data and store it in an Azure storage account. When you create a worker role, Visual Studio automatically enables Diagnostics 1.0 as part of the solution in Azure Software Development Kits (SDKs) for .NET 2.4 and earlier. The following instructions describe the process for creating the worker role, disabling Diagnostics 1.0 from the solution, and deploying Diagnostics 1.2 or 1.3 to your worker role. ### Prerequisites-This article assumes you have an Azure subscription and are using Visual Studio with the Azure SDK. If you do not have an Azure subscription, you can sign up for the [Free Trial][Free Trial]. Make sure to [Install and configure Azure PowerShell version 0.8.7 or later][Install and configure Azure PowerShell version 0.8.7 or later]. +This article assumes you have an Azure subscription and are using Visual Studio with the Azure SDK. If you don't have an Azure subscription, you can sign up for the [Free Trial][Free Trial]. Make sure to [Install and configure Azure PowerShell version 0.8.7 or later][Install and configure Azure PowerShell version 0.8.7 or later]. ### Step 1: Create a Worker Role 1. Launch **Visual Studio**.-2. Create an **Azure Cloud Service** project from the **Cloud** template that targets .NET Framework 4.5. Name the project "WadExample" and click Ok. -3. Select **Worker Role** and click Ok. The project will be created. +2. Create an **Azure Cloud Service** project from the **Cloud** template that targets .NET Framework 4.5. Name the project "WadExample" and select **OK**. +3. Select **Worker Role**, and then select **OK**. The project is created. 4. In **Solution Explorer**, double-click the **WorkerRole1** properties file.-5. In the **Configuration** tab, un-check **Enable Diagnostics** to disable Diagnostics 1.0 (Azure SDK 2.4 and earlier). +5. In the **Configuration** tab, uncheck **Enable Diagnostics** to disable Diagnostics 1.0 (Azure SDK 2.4 and earlier). 6. Build your solution to verify that you have no errors. ### Step 2: Instrument your code-Replace the contents of WorkerRole.cs with the following code. The class SampleEventSourceWriter, inherited from the [EventSource Class][EventSource Class], implements four logging methods: **SendEnums**, **MessageMethod**, **SetOther** and **HighFreq**. The first parameter to the **WriteEvent** method defines the ID for the respective event. 
The Run method implements an infinite loop that calls each of the logging methods implemented in the **SampleEventSourceWriter** class every 10 seconds. +Replace the contents of WorkerRole.cs with the following code. The class SampleEventSourceWriter, inherited from the [EventSource Class][EventSource Class], implements four logging methods: **SendEnums**, **MessageMethod**, **SetOther**, and **HighFreq**. The first parameter to the **WriteEvent** method defines the ID for the respective event. The Run method implements an infinite loop that calls each of the logging methods implemented in the **SampleEventSourceWriter** class every 10 seconds. ```csharp using Microsoft.WindowsAzure.ServiceRuntime; namespace WorkerRole1 3. In the **Microsoft Azure Publish Settings** dialog, select **Create New…**. 4. In the **Create Cloud Service and Storage Account** dialog, enter a **Name** (for example, "WadExample") and select a region or affinity group. 5. Set the **Environment** to **Staging**.-6. Modify any other **Settings** as appropriate and click **Publish**. -7. After deployment has completed, verify in the Azure portal that your cloud service is in a **Running** state. +6. Modify any other **Settings** as appropriate and select **Publish**. +7. After the deployment completes, verify in the Azure portal that your cloud service is in a **Running** state. ### Step 4: Create your Diagnostics configuration file and install the extension 1. Download the public configuration file schema definition by executing the following PowerShell command: namespace WorkerRole1 ```powershell (Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PublicConfigurationSchema | Out-File -Encoding utf8 -FilePath 'WadConfig.xsd' ```-2. Add an XML file to your **WorkerRole1** project by right-clicking on the **WorkerRole1** project and select **Add** -> **New Item…** -> **Visual C# items** -> **Data** -> **XML File**. Name the file "WadExample.xml". +2. Add an XML file to your **WorkerRole1** project by right-clicking on the **WorkerRole1** project and selecting **Add** -> **New Item…** -> **Visual C# items** -> **Data** -> **XML File**. Name the file `WadExample.xml`. ![CloudServices_diag_add_xml](./media/cloud-services-dotnet-diagnostics/AddXmlFile.png)-3. Associate the WadConfig.xsd with the configuration file. Make sure the WadExample.xml editor window is the active window. Press **F4** to open the **Properties** window. Click the **Schemas** property in the **Properties** window. Click the **…** in the **Schemas** property. Click the **Add…** button and navigate to the location where you saved the XSD file and select the file WadConfig.xsd. Click **OK**. +3. Associate the WadConfig.xsd with the configuration file. Make sure the WadExample.xml editor window is the active window. Press **F4** to open the **Properties** window. Select the **Schemas** property in the **Properties** window. Select the **…** in the **Schemas** property. Select the **Add…** button and navigate to the location where you saved the .xsd file and select the file WadConfig.xsd. Select **OK**. 4. Replace the contents of the WadExample.xml configuration file with the following XML and save the file. This configuration file defines a couple of performance counters to collect: one for CPU utilization and one for memory utilization. Then the configuration defines the four events corresponding to the methods in the SampleEventSourceWriter class. 
Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext -Diagnostic ``` ### Step 6: Look at your telemetry data-In the Visual Studio **Server Explorer**, navigate to the wadexample storage account. After the cloud service has been running about five (5) minutes, you should see the tables **WADEnumsTable**, **WADHighFreqTable**, **WADMessageTable**, **WADPerformanceCountersTable** and **WADSetOtherTable**. Double-click one of the tables to view the telemetry that has been collected. +In the Visual Studio **Server Explorer**, navigate to the wadexample storage account. After the cloud service has been running for about five minutes, you should see the tables **WADEnumsTable**, **WADHighFreqTable**, **WADMessageTable**, **WADPerformanceCountersTable**, and **WADSetOtherTable**. Double-click one of the tables to view the collected telemetry. ![CloudServices_diag_tables](./media/cloud-services-dotnet-diagnostics/WadExampleTables.png) The Diagnostics configuration file defines values that are used to initialize di If you have trouble, see [Troubleshooting Azure Diagnostics](../azure-monitor/agents/diagnostics-extension-troubleshooting.md) for help with common problems. ## Next Steps-[See a list of related Azure virtual-machine diagnostic articles](../azure-monitor/agents/diagnostics-extension-overview.md) to change the data you are collecting, troubleshoot problems or learn more about diagnostics in general. +[See a list of related Azure virtual-machine diagnostic articles](../azure-monitor/agents/diagnostics-extension-overview.md) to change the data you collect, troubleshoot problems, or learn more about diagnostics in general. [EventSource Class]: /dotnet/api/system.diagnostics.tracing.eventsource |
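Step 5 of this walkthrough applies the WadExample.xml configuration with Set-AzureServiceDiagnosticsExtension. A minimal sketch of that step follows, assuming the classic Azure Service Management module; the local file path is a placeholder, while the service, slot, and role names match the tutorial.

```powershell
# The configuration file path is a placeholder; adjust it to where WadExample.xml lives.
$storageName = "wadexample"
$key = (Get-AzureStorageKey -StorageAccountName $storageName).Primary
$storageContext = New-AzureStorageContext -StorageAccountName $storageName -StorageAccountKey $key

Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext `
    -DiagnosticsConfigurationPath "C:\WadExample\WorkerRole1\WadExample.xml" `
    -ServiceName "WadExample" -Slot Staging -Role "WorkerRole1"
```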
cloud-services | Cloud Services Dotnet Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-get-started.md | Title: Get started with Azure Cloud Services (classic) and ASP.NET | Microsoft Docs -description: Learn how to create a multi-tier app using ASP.NET MVC and Azure. The app runs in a cloud service, with web role and worker role. It uses Entity Framework, SQL Database, and Azure Storage queues and blobs. +description: Learn how to create a multi-tier app using ASP.NET Model-View-Controller (MVC) and Azure. The app runs in a cloud service, with web role and worker role. It uses Entity Framework, SQL Database, and Azure Storage queues and blobs. Previously updated : 02/21/2023 Last updated : 07/23/2024 -This tutorial shows how to create a multi-tier .NET application with an ASP.NET MVC front-end, and deploy it to an [Azure cloud service](cloud-services-choose-me.md). The application uses [Azure SQL Database](/previous-versions/azure/ee336279(v=azure.100)), the [Azure Blob service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/unstructured-blob-storage), and the [Azure Queue service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern). You can [download the Visual Studio project](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4) from the MSDN Code Gallery. +This tutorial shows you how to create a multi-tier .NET application with an ASP.NET Model-View-Controller (MVC) front-end and deploy it to an [Azure cloud service](cloud-services-choose-me.md). The application uses [Azure SQL Database](/previous-versions/azure/ee336279(v=azure.100)), the [Azure Blob service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/unstructured-blob-storage), and the [Azure Queue service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern). You can [download the Visual Studio project](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4) from the Microsoft Developer Network (MSDN) Code Gallery. The tutorial shows you how to build and run the application locally, how to deploy it to Azure and run in the cloud, and how to build it from scratch. You can start by building from scratch and then do the test and deploy steps afterward if you prefer. The application uses the [queue-centric work pattern](https://www.asp.net/aspnet ## Alternative architecture: App Service and WebJobs This tutorial shows how to run both front-end and back-end in an Azure cloud service. An alternative is to run the front-end in [Azure App Service](../app-service/index.yml) and use the [WebJobs](../app-service/webjobs-create.md) feature for the back-end. For a tutorial that uses WebJobs, see [Get Started with the Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). For information about how to choose the services that best fit your scenario, see [Azure App Service, Cloud Services, and virtual machines comparison](/azure/architecture/guide/technology-choices/compute-decision-tree). -## What you'll learn +## Learning goals * How to enable your machine for Azure development by installing the Azure SDK. 
* How to create a Visual Studio cloud service project with an ASP.NET MVC web role and a worker role. * How to test the cloud service project locally, using the Azure Storage Emulator. This tutorial shows how to run both front-end and back-end in an Azure cloud ser * How to use the Azure Queue service for communication between tiers. ## Prerequisites-The tutorial assumes that you understand [basic concepts about Azure cloud services](cloud-services-choose-me.md) such as *web role* and *worker role* terminology. It also assumes that you know how to work with [ASP.NET MVC](https://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started) or [Web Forms](https://www.asp.net/web-forms/tutorials/aspnet-45/getting-started-with-aspnet-45-web-forms/introduction-and-overview) projects in Visual Studio. The sample application uses MVC, but most of the tutorial also applies to Web Forms. +The tutorial assumes that you understand [basic concepts about Azure cloud services](cloud-services-choose-me.md) such as *web role* and *worker role* terminology. It also assumes that you know how to work with [ASP.NET MVC](https://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started) or [Web Forms](https://www.asp.net/web-forms/tutorials/aspnet-45/getting-started-with-aspnet-45-web-forms/introduction-and-overview) projects in Visual Studio. The sample application uses MVC, but most of the tutorial also applies to Web Forms. -You can run the app locally without an Azure subscription, but you'll need one to deploy the application to the cloud. If you don't have an account, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A55E3C668) or [sign up for a free trial](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A55E3C668). +You can run the app locally without an Azure subscription, but you need one to deploy the application to the cloud. If you don't have an account, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A55E3C668) or [sign up for a free trial](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A55E3C668). The tutorial instructions work with any of the following products: The tutorial instructions work with any of the following products: If you don't have one of these, Visual Studio may be installed automatically when you install the Azure SDK. ## Application architecture-The app stores ads in a SQL database, using Entity Framework Code First to create the tables and access the data. For each ad, the database stores two URLs, one for the full-size image and one for the thumbnail. +The app stores ads in an SQL database, using Entity Framework Code First to create the tables and access the data. For each ad, the database stores two URLs, one for the full-size image and one for the thumbnail. ![This is an image of an Ad table](./media/cloud-services-dotnet-get-started/adtable.png) When a user uploads an image, the front-end running in a web role stores the ima 1. Download and unzip the [completed solution](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4). 2. Start Visual Studio. 3. From the **File** menu choose **Open Project**, navigate to where you downloaded the solution, and then open the solution file.-4. Press CTRL+SHIFT+B to build the solution. +4. To build the solution, press CTRL+SHIFT+B. 
- By default, Visual Studio automatically restores the NuGet package content, which was not included in the *.zip* file. If the packages don't restore, install them manually by going to the **Manage NuGet Packages for Solution** dialog box and clicking the **Restore** button at the top right. + By default, Visual Studio automatically restores the NuGet package content, which wasn't included in the *.zip* file. If the packages don't restore, install them manually by going to the **Manage NuGet Packages for Solution** dialog box and clicking the **Restore** button at the top right. 5. In **Solution Explorer**, make sure that **ContosoAdsCloudService** is selected as the startup project. 6. If you're using Visual Studio 2015 or higher, change the SQL Server connection string in the application *Web.config* file of the ContosoAdsWeb project and in the *ServiceConfiguration.Local.cscfg* file of the ContosoAdsCloudService project. In each case, change "(localdb)\v11.0" to "(localdb)\MSSQLLocalDB".-7. Press CTRL+F5 to run the application. +7. To run the application, press CTRL+F5. When you run a cloud service project locally, Visual Studio automatically invokes the Azure *compute emulator* and Azure *storage emulator*. The compute emulator uses your computer's resources to simulate the web role and worker role environments. The storage emulator uses a [SQL Server Express LocalDB](/sql/database-engine/configure-windows/sql-server-2016-express-localdb) database to simulate Azure cloud storage. The first time you run a cloud service project, it takes a minute or so for the emulators to start up. When emulator startup is finished, the default browser opens to the application home page. ![Contoso Ads architecture 1](./media/cloud-services-dotnet-get-started/home.png)-8. Click **Create an Ad**. -9. Enter some test data and select a *.jpg* image to upload, and then click **Create**. +8. Select **Create an Ad**. +9. Enter some test data and select a *.jpg* image to upload, and then select **Create**. ![Image shows Create page](./media/cloud-services-dotnet-get-started/create.png) - The app goes to the Index page, but it doesn't show a thumbnail for the new ad because that processing hasn't happened yet. + The app goes to the Index page, but it doesn't show a thumbnail for the new ad because that processing has yet to happen. 10. Wait a moment and then refresh the Index page to see the thumbnail. ![Index page](./media/cloud-services-dotnet-get-started/list.png)-11. Click **Details** for your ad to see the full-size image. +11. Select **Details** for your ad to see the full-size image. ![Details page](./media/cloud-services-dotnet-get-started/details.png) You've been running the application entirely on your local computer, with no connection to the cloud. The storage emulator stores the queue and blob data in a SQL Server Express LocalDB database, and the application stores the ad data in another LocalDB database. Entity Framework Code First automatically created the ad database the first time the web app tried to access it. -In the following section you'll configure the solution to use Azure cloud resources for queues, blobs, and the application database when it runs in the cloud. If you wanted to continue to run locally but use cloud storage and database resources, you could do that. It's just a matter of setting connection strings, which you'll see how to do. 
+In the following section, you configure the solution to use Azure cloud resources for queues, blobs, and the application database when it runs in the cloud. If you wanted to continue to run locally but use cloud storage and database resources, you could do that. It's just a matter of setting connection strings, which this tutorial shows you how to do. ## Deploy the application to Azure-You'll do the following steps to run the application in the cloud: +You do the following steps to run the application in the cloud: * Create an Azure cloud service. * Create a database in Azure SQL Database. You'll do the following steps to run the application in the cloud: * Deploy the project to your Azure cloud service. ### Create an Azure cloud service-An Azure cloud service is the environment the application will run in. +An Azure cloud service is the environment the application runs in. 1. In your browser, open the [Azure portal](https://portal.azure.com).-2. Click **Create a resource > Compute > Cloud Service**. +2. Select **Create a resource > Compute > Cloud Service**. -3. In the DNS name input box, enter a URL prefix for the cloud service. +3. In the Domain Name System (DNS) name input box, enter a URL prefix for the cloud service. - This URL has to be unique. You'll get an error message if the prefix you choose is already in use. -4. Specify a new Resource group for the service. Click **Create new** and then type a name in the Resource group input box, such as CS_contososadsRG. + This URL has to be unique. You get an error message if the prefix you choose is already in use. +4. Specify a new Resource group for the service. Select **Create new** and then type a name in the Resource group input box, such as CS_contososadsRG. 5. Choose the region where you want to deploy the application. - This field specifies which datacenter your cloud service will be hosted in. For a production application, you'd choose the region closest to your customers. For this tutorial, choose the region closest to you. -5. Click **Create**. + This field specifies which datacenter your cloud service is hosted in. For a production application, you'd choose the region closest to your customers. For this tutorial, choose the region closest to you. +5. Select **Create**. In the following image, a cloud service is created with the URL CSvccontosoads.cloudapp.net. ![Image shows New Cloud Service](./media/cloud-services-dotnet-get-started/newcs.png) ### Create a database in Azure SQL Database-When the app runs in the cloud, it will use a cloud-based database. +When the app runs in the cloud, it uses a cloud-based database. -1. In the [Azure portal](https://portal.azure.com), click **Create a resource > Databases > SQL Database**. +1. In the [Azure portal](https://portal.azure.com), select **Create a resource > Databases > SQL Database**. 2. In the **Database Name** box, enter *contosoads*.-3. In the **Resource group**, click **Use existing** and select the resource group used for the cloud service. -4. In the following image, click **Server - Configure required settings** and **Create a new server**. +3. In the **Resource group**, choose **Use existing** and select the resource group used for the cloud service. +4. In the following image, select **Server - Configure required settings** and **Create a new server**. ![Tunnel to database server](./media/cloud-services-dotnet-get-started/newdb.png) When the app runs in the cloud, it will use a cloud-based database. 6. Enter an administrator **Login Name** and **Password**. 
- If you selected **Create a new server**, you aren't entering an existing name and password here. You're entering a new name and password that you're defining now to use later when you access the database. If you selected a server that you created previously, you'll be prompted for the password to the administrative user account you already created. + If you selected **Create a new server**, you aren't entering an existing name and password here. You're entering a new name and password that you're defining now to use later when you access the database. If you selected a server that you created previously, the portal prompts you for the password to the administrative user account you already created. 7. Choose the same **Location** that you chose for the cloud service. - When the cloud service and database are in different datacenters (different regions), latency will increase and you will be charged for bandwidth outside the data center. Bandwidth within a data center is free. + When the cloud service and database are in different datacenters (different regions), latency increases and you incur charges for bandwidth outside the data center. Bandwidth within a data center is free. 8. Check **Allow azure services to access server**.-9. Click **Select** for the new server. +9. Select **Select** for the new server. ![New server](./media/cloud-services-dotnet-get-started/newdbserver.png)-10. Click **Create**. +10. Choose **Create**. ### Create an Azure storage account An Azure storage account provides resources for storing queue and blob data in the cloud. -In a real-world application, you would typically create separate accounts for application data versus logging data, and separate accounts for test data versus production data. For this tutorial, you'll use just one account. +In a real-world application, you would typically create separate accounts for application data versus logging data, and separate accounts for test data versus production data. For this tutorial, you use just one account. -1. In the [Azure portal](https://portal.azure.com), click **Create a resource > Storage > Storage account - blob, file, table, queue**. +1. In the [Azure portal](https://portal.azure.com), select **Create a resource > Storage > Storage account - blob, file, table, queue**. 2. In the **Name** box, enter a URL prefix. - This prefix plus the text you see under the box will be the unique URL to your storage account. If the prefix you enter has already been used by someone else, you'll have to choose a different prefix. + This prefix plus the text you see under the box is the unique URL to your storage account. If the prefix you enter is already in use by someone else, choose a different prefix. 3. Set the **Deployment model** to *Classic*. 4. Set the **Replication** drop-down list to **Locally redundant storage**. When geo-replication is enabled for a storage account, the stored content is replicated to a secondary datacenter to enable failover if a major disaster occurs in the primary location. Geo-replication can incur additional costs. For test and development accounts, you generally don't want to pay for geo-replication. For more information, see [Create, manage, or delete a storage account](../storage/common/storage-account-create.md). -5. In the **Resource group**, click **Use existing** and select the resource group used for the cloud service. +5. In the **Resource group**, select **Use existing** and select the resource group used for the cloud service. 6. 
Set the **Location** drop-down list to the same region you chose for the cloud service. - When the cloud service and storage account are in different datacenters (different regions), latency will increase and you will be charged for bandwidth outside the data center. Bandwidth within a data center is free. + When the cloud service and storage account are in different datacenters (different regions), latency increases and you incur charges for bandwidth outside the data center. Bandwidth within a data center is free. - Azure affinity groups provide a mechanism to minimize the distance between resources in a data center, which can reduce latency. This tutorial does not use affinity groups. For more information, see [How to Create an Affinity Group in Azure](/previous-versions/azure/reference/gg715317(v=azure.100)). -7. Click **Create**. + Azure affinity groups provide a mechanism to minimize the distance between resources in a data center, which can reduce latency. This tutorial doesn't use affinity groups. For more information, see [How to Create an Affinity Group in Azure](/previous-versions/azure/reference/gg715317(v=azure.100)). +7. Choose **Create**. ![New storage account](./media/cloud-services-dotnet-get-started/newstorage.png) In a real-world application, you would typically create separate accounts for ap The web project and the worker role project each have their own database connection string, and each needs to point to the database in Azure SQL Database when the app runs in Azure. -You'll use a [Web.config transform](https://www.asp.net/mvc/tutorials/deployment/visual-studio-web-deployment/web-config-transformations) for the web role and a cloud service environment setting for the worker role. +You use a [Web.config transform](https://www.asp.net/mvc/tutorials/deployment/visual-studio-web-deployment/web-config-transformations) for the web role and a cloud service environment setting for the worker role. > [!NOTE] > In this section and the next section, you store credentials in project files. [Don't store sensitive data in public source code repositories](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/source-control#secrets). You'll use a [Web.config transform](https://www.asp.net/mvc/tutorials/deployment ``` Leave the file open for editing.-2. In the [Azure portal](https://portal.azure.com), click **SQL Databases** in the left pane, click the database you created for this tutorial, and then click **Show connection strings**. +2. In the [Azure portal](https://portal.azure.com), choose **SQL Databases** in the left pane, select the database you created for this tutorial, and then select **Show connection strings**. ![Show connection strings](./media/cloud-services-dotnet-get-started/showcs.png) You'll use a [Web.config transform](https://www.asp.net/mvc/tutorials/deployment 4. In the connection string that you pasted into the *Web.Release.config* transform file, replace `{your_password_here}` with the password you created for the new SQL database. 5. Save the file. 6. Select and copy the connection string (without the surrounding quotation marks) for use in the following steps for configuring the worker role project.-7. In **Solution Explorer**, under **Roles** in the cloud service project, right-click **ContosoAdsWorker** and then click **Properties**. 
![Screenshot that highlights the Properties menu option.](./media/cloud-services-dotnet-get-started/rolepropertiesworker.png)-8. Click the **Settings** tab. 
+8. Choose the **Settings** tab. 
9. Change **Service Configuration** to **Cloud**. 
10. Select the **Value** field for the `ContosoAdsDbConnectionString` setting, and then paste the connection string that you copied from the previous section of the tutorial. 

You'll use a [Web.config transform](https://www.asp.net/mvc/tutorials/deployment 
11. Save your changes. 

### Configure the solution to use your Azure storage account when it runs in Azure-Azure storage account connection strings for both the web role project and the worker role project are stored in environment settings in the cloud service project. For each project, there is a separate set of settings to be used when the application runs locally and when it runs in the cloud. You'll update the cloud environment settings for both web and worker role projects. 
+Azure storage account connection strings for both the web role project and the worker role project are stored in environment settings in the cloud service project. For each project, there's a separate set of settings to be used when the application runs locally and when it runs in the cloud. You update the cloud environment settings for both web and worker role projects. 

-1. In **Solution Explorer**, right-click **ContosoAdsWeb** under **Roles** in the **ContosoAdsCloudService** project, and then click **Properties**. 
+1. In **Solution Explorer**, right-click **ContosoAdsWeb** under **Roles** in the **ContosoAdsCloudService** project, and then select **Properties**. 

![Image shows Role properties](./media/cloud-services-dotnet-get-started/roleproperties.png)-2. Click the **Settings** tab. In the **Service Configuration** drop-down box, choose **Cloud**. 
+2. Choose the **Settings** tab. In the **Service Configuration** drop-down box, choose **Cloud**. 

![Cloud configuration](./media/cloud-services-dotnet-get-started/sccloud.png)-3. Select the **StorageConnectionString** entry, and you'll see an ellipsis (**...**) button at the right end of the line. Click the ellipsis button to open the **Create Storage Account Connection String** dialog box. 
+3. Select the **StorageConnectionString** entry, and you see an ellipsis (**...**) button at the right end of the line. Choose the ellipsis button to open the **Create Storage Account Connection String** dialog box. 

![Open Connection String Create box](./media/cloud-services-dotnet-get-started/opencscreate.png)-4. In the **Create Storage Connection String** dialog box, click **Your subscription**, choose the storage account that you created earlier, and then click **OK**. If you're not already logged in, you'll be prompted for your Azure account credentials. 
+4. In the **Create Storage Connection String** dialog box, select **Your subscription**, choose the storage account that you created earlier, and then select **OK**. Visual Studio prompts you for your Azure account credentials if you're not already signed in. 

![Create Storage Connection String](./media/cloud-services-dotnet-get-started/createstoragecs.png) 
5. Save your changes. 
Azure storage account connection strings for both the web role project and the w This connection string is used for logging. 
7. Follow the same procedure that you used for the **ContosoAdsWeb** role to set both connection strings for the **ContosoAdsWorker** role. Don't forget to set **Service Configuration** to **Cloud**. 
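To see where these values land in code, here's a minimal sketch (not part of the tutorial's downloaded solution) of reading the `StorageConnectionString` setting at run time. It assumes the *Microsoft.WindowsAzure.ConfigurationManager* and *WindowsAzure.Storage* NuGet packages that the tutorial already uses; `CloudConfigurationManager.GetSetting` resolves the setting from the *.cscfg* file when the role runs in Azure or in the compute emulator:

```csharp
using Microsoft.WindowsAzure;          // CloudConfigurationManager
using Microsoft.WindowsAzure.Storage;  // CloudStorageAccount

public static class StorageAccountHelper
{
    public static CloudStorageAccount GetAccount()
    {
        // Reads from ServiceConfiguration.*.cscfg when running under Azure or the
        // compute emulator; falls back to app.config/web.config app settings otherwise.
        string connectionString = CloudConfigurationManager.GetSetting("StorageConnectionString");
        return CloudStorageAccount.Parse(connectionString);
    }
}
```

Because the same setting name exists in both the Local and Cloud service configurations, code like this works unchanged in the emulator and in Azure.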
-The role environment settings that you have configured using the Visual Studio UI are stored in the following files in the ContosoAdsCloudService project: 
+The role environment settings that you configured using the Visual Studio UI are stored in the following files in the ContosoAdsCloudService project: 

* *ServiceDefinition.csdef* - Defines the setting names. 
* *ServiceConfiguration.Cloud.cscfg* - Provides values for when the app runs in the cloud. 

And the *ServiceConfiguration.Cloud.cscfg* file includes the values you entered </Role> ``` 
-The `<Instances>` setting specifies the number of virtual machines that Azure will run the worker role code on. The [Next steps](#next-steps) section includes links to more information about scaling out a cloud service, 
+The `<Instances>` setting specifies the number of virtual machines that Azure runs the worker role code on. The [Next steps](#next-steps) section includes links to more information about scaling out a cloud service. 

### Deploy the project to Azure 
1. In **Solution Explorer**, right-click the **ContosoAdsCloudService** cloud project and then select **Publish**. 

![Publish menu](./media/cloud-services-dotnet-get-started/pubmenu.png)-2. In the **Sign in** step of the **Publish Azure Application** wizard, click **Next**. 
+2. In the **Sign in** step of the **Publish Azure Application** wizard, select **Next**. 

![Sign in step](./media/cloud-services-dotnet-get-started/pubsignin.png)-3. In the **Settings** step of the wizard, click **Next**. 
+3. In the **Settings** step of the wizard, select **Next**. 

![Settings step](./media/cloud-services-dotnet-get-started/pubsettings.png) 

The default settings in the **Advanced** tab are fine for this tutorial. For information about the advanced tab, see [Publish Azure Application Wizard](/visualstudio/azure/vs-azure-tools-publish-azure-application-wizard).-4. In the **Summary** step, click **Publish**. 
+4. In the **Summary** step, select **Publish**. 

![Summary step](./media/cloud-services-dotnet-get-started/pubsummary.png) 

The **Azure Activity Log** window opens in Visual Studio.-5. Click the right arrow icon to expand the deployment details. 
+5. Choose the right arrow icon to expand the deployment details. 

The deployment can take 5 minutes or more to complete. 

![Azure Activity Log window](./media/cloud-services-dotnet-get-started/waal.png)-6. When the deployment status is complete, click the **Web app URL** to start the application. 
+6. When the deployment status is complete, select the **Web app URL** to start the application. 
7. You can now test the app by creating, viewing, and editing some ads, as you did when you ran the application locally. 

> [!NOTE] The `<Instances>` setting specifies the number of virtual machines that Azure wi > 
## Create the application from scratch-If you haven't already downloaded -[the completed application](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4), do that now. You'll copy files from the downloaded project into the new project. 
+If you still need to download [the completed application](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4), do that now. Copy the files from the downloaded project into the new project. 

Creating the Contoso Ads application involves the following steps: 

After the solution is created, you'll review the code that is unique to cloud se 
### Create a cloud service Visual Studio solution 
1. In Visual Studio, choose **New Project** from the **File** menu. 
2. 
In the left pane of the **New Project** dialog box, expand **Visual C#** and choose **Cloud** templates, and then choose the **Azure Cloud Service** template.-3. Name the project and solution ContosoAdsCloudService, and then click **OK**. +3. Name the project and solution ContosoAdsCloudService, and then select **OK**. ![New Project](./media/cloud-services-dotnet-get-started/newproject.png) 4. In the **New Azure Cloud Service** dialog box, add a web role and a worker role. Name the web role ContosoAdsWeb, and name the worker role ContosoAdsWorker. (Use the pencil icon in the right-hand pane to change the default names of the roles.) ![New Cloud Service Project](./media/cloud-services-dotnet-get-started/newcsproj.png)-5. When you see the **New ASP.NET Project** dialog box for the web role, choose the MVC template, and then click **Change Authentication**. +5. When you see the **New ASP.NET Project** dialog box for the web role, choose the MVC template, and then select **Change Authentication**. ![Change Authentication](./media/cloud-services-dotnet-get-started/chgauth.png)-6. In the **Change Authentication** dialog box, choose **No Authentication**, and then click **OK**. +6. In the **Change Authentication** dialog box, choose **No Authentication**, and then select **OK**. ![No Authentication](./media/cloud-services-dotnet-get-started/noauth.png)-7. In the **New ASP.NET Project** dialog, click **OK**. +7. In the **New ASP.NET Project** dialog, select **OK**. 8. In **Solution Explorer**, right-click the solution (not one of the projects), and choose **Add - New Project**.-9. In the **Add New Project** dialog box, choose **Windows** under **Visual C#** in the left pane, and then click the **Class Library** template. -10. Name the project *ContosoAdsCommon*, and then click **OK**. +9. In the **Add New Project** dialog box, choose **Windows** under **Visual C#** in the left pane, and then select the **Class Library** template. +10. Name the project *ContosoAdsCommon*, and then select **OK**. You need to reference the Entity Framework context and the data model from both web and worker role projects. As an alternative, you could define the EF-related classes in the web role project and reference that project from the worker role project. But in the alternative approach, your worker role project would have a reference to web assemblies that it doesn't need. ### Update and add NuGet packages 1. Open the **Manage NuGet Packages** dialog box for the solution. 2. At the top of the window, select **Updates**.-3. Look for the *WindowsAzure.Storage* package, and if it's in the list, select it and select the web and worker projects to update it in, and then click **Update**. +3. Look for the *WindowsAzure.Storage* package, and if it's in the list, select it and select the web and worker projects to update it in, and then select **Update**. - The storage client library is updated more frequently than Visual Studio project templates, so you'll often find that the version in a newly-created project needs to be updated. + The storage client library is updated more frequently than Visual Studio project templates, so you may find that the version in a newly created project needs to be updated. 4. At the top of the window, select **Browse**. 5. Find the *EntityFramework* NuGet package, and install it in all three projects. 6. Find the *Microsoft.WindowsAzure.ConfigurationManager* NuGet package, and install it in the worker role project. ### Set project references-1. 
In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project. Right-click the ContosoAdsWeb project, and then click **References** - **Add References**. In the **Reference Manager** dialog box, select **Solution – Projects** in the left pane, select **ContosoAdsCommon**, and then click **OK**. 
+1. In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project. Right-click the ContosoAdsWeb project, and then select **References** - **Add References**. In the **Reference Manager** dialog box, select **Solution – Projects** in the left pane, select **ContosoAdsCommon**, and then select **OK**. 
2. In the ContosoAdsWorker project, set a reference to the ContosoAdsCommon project. 

- ContosoAdsCommon will contain the Entity Framework data model and context class, which will be used by both the front-end and back-end. 
+ ContosoAdsCommon contains the Entity Framework data model and context class, which both the front-end and back-end use. 
3. In the ContosoAdsWorker project, set a reference to `System.Drawing`. 

This assembly is used by the back-end to convert images to thumbnails. 

In this section, you configure Azure Storage and SQL connection strings for test 
If you're using Visual Studio 2015 or higher, replace "v11.0" with "MSSQLLocalDB". 
2. Save your changes.-3. In the ContosoAdsCloudService project, right-click ContosoAdsWeb under **Roles**, and then select **Properties**. 

![Role properties image](./media/cloud-services-dotnet-get-started/roleproperties.png) 

-4. In the **ContosoAdsWeb [Role]** properties window, click the **Settings** tab, and then click **Add Setting**. 
+4. In the **ContosoAdsWeb [Role]** properties window, select the **Settings** tab, and then select **Add Setting**. 

Leave **Service Configuration** set to **All Configurations**. 
5. Add a setting named *StorageConnectionString*. Set **Type** to *ConnectionString*, and set **Value** to *UseDevelopmentStorage=true*. 

In this section, you configure Azure Storage and SQL connection strings for test 
![New connection string](./media/cloud-services-dotnet-get-started/scall.png) 
6. Save your changes. 
7. Follow the same procedure to add a storage connection string in the ContosoAdsWorker role properties.-8. Still in the **ContosoAdsWorker [Role]** properties window, add another connection string: 
+8. While still in the **ContosoAdsWorker [Role]** properties window, add another connection string: 

* Name: ContosoAdsDbConnectionString 
* Type: String 

In this section, you configure Azure Storage and SQL connection strings for test 
``` 

### Add code files-In this section, you copy code files from the downloaded solution into the new solution. The following sections will show and explain key parts of this code. 
+In this section, you copy code files from the downloaded solution into the new solution. The following sections show and explain key parts of this code. 

-To add files to a project or a folder, right-click the project or folder and click **Add** - **Existing Item**. Select the files you want and then click **Add**. If asked whether you want to replace existing files, click **Yes**. 
+To add files to a project or a folder, right-click the project or folder and select **Add** - **Existing Item**. Select the files you want and then select **Add**. If asked whether you want to replace existing files, select **Yes**. 

1. 
In the ContosoAdsCommon project, delete the *Class1.cs* file and add in its place the *Ad.cs* and *ContosoAdscontext.cs* files from the downloaded project. 
2. In the ContosoAdsWeb project, add the following files from the downloaded project. 

To add files to a project or a folder, right-click the project or folder and cli 
* In the *Views\Ad* folder (create the folder first): five *.cshtml* files. 
3. In the ContosoAdsWorker project, add *WorkerRole.cs* from the downloaded project. 

-You can now build and run the application as instructed earlier in the tutorial, and the app will use local database and storage emulator resources. 
+You can now build and run the application as instructed earlier in the tutorial, and the app uses local database and storage emulator resources. 

-The following sections explain the code related to working with the Azure environment, blobs, and queues. This tutorial does not explain how to create MVC controllers and views using scaffolding, how to write Entity Framework code that works with SQL Server databases, or the basics of asynchronous programming in ASP.NET 4.5. For information about these topics, see the following resources: 
+The following sections explain the code related to working with the Azure environment, blobs, and queues. This tutorial doesn't explain how to create MVC controllers and views using scaffolding, how to write Entity Framework code that works with SQL Server databases, or the basics of asynchronous programming in ASP.NET 4.5. For information about these topics, see the following resources: 

* [Get started with MVC 5](https://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started) 
* [Get started with EF 6 and MVC 5](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc) 

public class Ad 
``` 

### ContosoAdsCommon - ContosoAdsContext.cs-The ContosoAdsContext class specifies that the Ad class is used in a DbSet collection, which Entity Framework will store in a SQL database. 
+The ContosoAdsContext class specifies that the Ad class is used in a DbSet collection, which Entity Framework stores in a SQL database. 

```csharp 
public class ContosoAdsContext : DbContext 
public class ContosoAdsContext : DbContext 
} 
``` 

-The class has two constructors. The first of them is used by the web project, and specifies the name of a connection string that is stored in the Web.config file. The second constructor enables you to pass in the actual connection string used by the worker role project, since it doesn't have a Web.config file. You saw earlier where this connection string was stored, and you'll see later how the code retrieves the connection string when it instantiates the DbContext class. 
+The class has two constructors. The first of them is used by the web project, and specifies the name of a connection string that is stored in the Web.config file. The second constructor enables you to pass in the actual connection string used by the worker role project, since it doesn't have a Web.config file. You saw earlier where this connection string was stored. Later, you see how the code retrieves the connection string when it instantiates the DbContext class. 
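As a sketch of that two-constructor pattern (the *ContosoAdsContext.cs* file you copied from the downloaded project is the authoritative version), the context can look like this:

```csharp
using System.Data.Entity;

public class ContosoAdsContext : DbContext
{
    // Used by the web role: Entity Framework looks up a connection string
    // named "ContosoAdsContext" in Web.config.
    public ContosoAdsContext() : base("name=ContosoAdsContext")
    {
    }

    // Used by the worker role, which has no Web.config: the caller passes in
    // the actual connection string that it read from the .cscfg file.
    public ContosoAdsContext(string connString) : base(connString)
    {
    }

    public DbSet<Ad> Ads { get; set; }
}
```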
### ContosoAdsWeb - Global.asax.cs-Code that is called from the `Application_Start` method creates an *images* blob container and an *images* queue if they don't already exist. This ensures that whenever you start using a new storage account, or start using the storage emulator on a new computer, the required blob container and queue will be created automatically. 
+Code that is called from the `Application_Start` method creates an *images* blob container and an *images* queue if they don't already exist. This code ensures that whenever you use a new storage account or use the storage emulator on a new computer, the code automatically creates the required blob container and queue. 

The code gets access to the storage account by using the storage connection string from the *.cscfg* file. 

An `<input>` element tells the browser to provide a file selection dialog. 

### ContosoAdsWorker - WorkerRole.cs - OnStart method 
The Azure worker role environment calls the `OnStart` method in the `WorkerRole` class when the worker role is getting started, and it calls the `Run` method when the `OnStart` method finishes. 

-The `OnStart` method gets the database connection string from the *.cscfg* file and passes it to the Entity Framework DbContext class. The SQLClient provider is used by default, so the provider does not have to be specified. 
+The `OnStart` method gets the database connection string from the *.cscfg* file and passes it to the Entity Framework DbContext class. The SQLClient provider is used by default, so the provider doesn't have to be specified. 

```csharp 
var dbConnString = CloudConfigurationManager.GetSetting("ContosoAdsDbConnectionString"); 

public override void Run() 
} 
``` 

-After each iteration of the loop, if no queue message was found, the program sleeps for a second. This prevents the worker role from incurring excessive CPU time and storage transaction costs. The Microsoft Customer Advisory Team tells a story about a developer who forgot to include this, deployed to production, and left for vacation. When they got back, their oversight cost more than the vacation. 
+After each iteration of the loop, if no queue message was found, the program sleeps for a second. This sleep prevents the worker role from incurring excessive CPU time and storage transaction costs. The Microsoft Customer Advisory Team tells a story about a developer who forgot to include this sleep function, deployed to production, and left for vacation. When they got back, their oversight cost more than the vacation. 

-Sometimes the content of a queue message causes an error in processing. This is called a *poison message*, and if you just logged an error and restarted the loop, you could endlessly try to process that message. Therefore the catch block includes an if statement that checks to see how many times the app has tried to process the current message, and if it has been more than 5 times, the message is deleted from the queue. 
+Sometimes the content of a queue message causes an error in processing. This kind of message is called a *poison message*. If you merely logged an error and restarted the loop, you could endlessly try to process that message. Therefore, the catch block includes an if statement that checks to see how many times the app tried to process the current message. If the count is higher than five times, the message is deleted from the queue. 
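Here's a minimal sketch of that polling pattern, assuming an initialized `CloudQueue` named `imagesQueue`; the downloaded *WorkerRole.cs* contains the tutorial's actual `Run` method:

```csharp
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class QueuePoller
{
    private readonly CloudQueue imagesQueue;

    public QueuePoller(CloudQueue queue)
    {
        imagesQueue = queue;
    }

    // One iteration of the worker role's polling loop.
    public void PollOnce()
    {
        CloudQueueMessage msg = null;
        try
        {
            msg = imagesQueue.GetMessage();
            if (msg != null)
            {
                // ProcessQueueMessage(msg) would do the real work here.
            }
            else
            {
                // No message found: sleep for a second to avoid excessive
                // CPU time and storage transaction costs.
                Thread.Sleep(1000);
            }
        }
        catch (StorageException)
        {
            if (msg != null && msg.DequeueCount > 5)
            {
                // Poison message: stop retrying after five attempts and
                // delete it so the loop doesn't process it endlessly.
                imagesQueue.DeleteMessage(msg);
            }
            Thread.Sleep(5000);
        }
    }
}
```

The `DequeueCount` property is maintained by the queue service itself, so the retry count survives worker role restarts.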
`ProcessQueueMessage` is called when a queue message is found. 

This code reads the database to get the image URL, converts the image to a thumb 
In case something doesn't work while you're following the instructions in this tutorial, here are some common errors and how to resolve them. 

### ServiceRuntime.RoleEnvironmentException-The `RoleEnvironment` object is provided by Azure when you run an application in Azure or when you run locally using the Azure Compute Emulator. If you get this error when you're running locally, make sure that you have set the ContosoAdsCloudService project as the startup project. This sets up the project to run using the Azure Compute Emulator. 
+The `RoleEnvironment` object is provided by Azure when you run an application in Azure or when you run locally using the Azure Compute Emulator. If you get this error when you're running locally, make sure that you set the ContosoAdsCloudService project as the startup project. This setting makes the project run using the Azure Compute Emulator. 

-One of the things the application uses the Azure RoleEnvironment for is to get the connection string values that are stored in the *.cscfg* files, so another cause of this exception is a missing connection string. Make sure that you created the StorageConnectionString setting for both Cloud and Local configurations in the ContosoAdsWeb project, and that you created both connection strings for both configurations in the ContosoAdsWorker project. If you do a **Find All** search for StorageConnectionString in the entire solution, you should see it 9 times in 6 files. 
+One of the things the application uses the Azure RoleEnvironment for is to get the connection string values that are stored in the *.cscfg* files, so another cause of this exception is a missing connection string. Make sure that you created the StorageConnectionString setting for both Cloud and Local configurations in the ContosoAdsWeb project, and that you created both connection strings for both configurations in the ContosoAdsWorker project. If you do a **Find All** search for StorageConnectionString in the entire solution, you should see it nine times in six files. 

-### Cannot override to port xxx. New port below minimum allowed value 8080 for protocol http 
-Try changing the port number used by the web project. Right-click the ContosoAdsWeb project, and then click **Properties**. Click the **Web** tab, and then change the port number in the **Project Url** setting. 
+### Can't override to port xxx. New port below minimum allowed value 8080 for protocol http 
+Try changing the port number used by the web project. Right-click the ContosoAdsWeb project, and then select **Properties**. Choose the **Web** tab, and then change the port number in the **Project Url** setting. 

For another alternative that might resolve the problem, see the following section. 

### Other errors when running locally-By default new cloud service projects use the Azure Compute Emulator express to simulate the Azure environment. This is a lightweight version of the full compute emulator, and under some conditions the full emulator will work when the express version does not. 
+By default, new cloud service projects use the Azure Compute Emulator express to simulate the Azure environment. The express emulator is a lightweight version of the full compute emulator, and under some conditions the full emulator works when the express version doesn't. 

-To change the project to use the full emulator, right-click the ContosoAdsCloudService project, and then click **Properties**. In the **Properties** window click the **Web** tab, and then click the **Use Full Emulator** radio button. 
+To change the project to use the full emulator, right-click the ContosoAdsCloudService project, and then select **Properties**. In the **Properties** window, select the **Web** tab, and then select the **Use Full Emulator** radio button. 
In order to run the application with the full emulator, you have to open Visual Studio with administrator privileges. 

## Next steps-The Contoso Ads application has intentionally been kept simple for a getting-started tutorial. For example, it doesn't implement [dependency injection](https://www.asp.net/mvc/tutorials/hands-on-labs/aspnet-mvc-4-dependency-injection) or the [repository and unit of work patterns](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/advanced-entity-framework-scenarios-for-an-mvc-web-application#repo), it doesn't [use an interface for logging](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/monitoring-and-telemetry#log), it doesn't use [EF Code First Migrations](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) to manage data model changes or [EF Connection Resiliency](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/connection-resiliency-and-command-interception-with-the-entity-framework-in-an-asp-net-mvc-application) to manage transient network errors, and so forth. 
--Here are some cloud service sample applications that demonstrate more real-world coding practices, listed from less complex to more complex: 
--* [PhluffyFotos](https://code.msdn.microsoft.com/PhluffyFotos-Sample-7ecffd31). Similar in concept to Contoso Ads but implements more features and more real-world coding practices. 
-* [Azure Cloud Service Multi-Tier Application with Tables, Queues, and Blobs](https://code.msdn.microsoft.com/windowsazure/Windows-Azure-Multi-Tier-eadceb36). Introduces Azure Storage tables as well as blobs and queues. Based on an older version of the Azure SDK for .NET, will require some modifications to work with the current version. 
+The Contoso Ads application is intentionally made simple for a getting-started tutorial. For example, it doesn't implement [dependency injection](https://www.asp.net/mvc/tutorials/hands-on-labs/aspnet-mvc-4-dependency-injection) or the [repository and unit of work patterns](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/advanced-entity-framework-scenarios-for-an-mvc-web-application#repo). It doesn't [use an interface for logging](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/monitoring-and-telemetry#log), it doesn't use [EF Code First Migrations](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) to manage data model changes or [EF Connection Resiliency](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/connection-resiliency-and-command-interception-with-the-entity-framework-in-an-asp-net-mvc-application) to manage transient network errors, and so forth. 

For general information about developing for the cloud, see [Building Real-World Cloud Apps with Azure](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/introduction). 

-For a video introduction to Azure Storage best practices and patterns, see Microsoft Azure Storage – What's New, Best Practices and Patterns. 
- For more information, see the following resources: 

* [How to manage Cloud Services](cloud-services-how-to-manage-portal.md) |
cloud-services | Cloud Services Dotnet Install Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md | Title: Install .NET on Azure Cloud Services (classic) roles description: This article describes how to manually install the .NET Framework on your cloud service web and worker roles. Previously updated : 02/21/2023 Last updated : 07/23/2024 

To add the installer for a *web* role: 

To add the installer for a *worker* role: 

* Right-click your *worker* role and select **Add** > **Existing Item**. Select the .NET installer and add it to the role. 

-When files are added in this way to the role content folder, they're automatically added to your cloud service package. The files are then deployed to a consistent location on the virtual machine. Repeat this process for each web and worker role in your cloud service so that all roles have a copy of the installer. 
+When files are added in this way to the role content folder, they're automatically added to your cloud service package. The files are then deployed to a consistent location on the virtual machine. Repeat this process for each web and worker role in your cloud service so that all roles have a copy of the installer. 

> [!NOTE] 
> You should install .NET Framework 4.6.2 on your cloud service role even if your application targets .NET Framework 4.6. The Guest OS includes the Knowledge Base [update 3098779](https://support.microsoft.com/kb/3098779) and [update 3097997](https://support.microsoft.com/kb/3097997). Issues can occur when you run your .NET applications if .NET Framework 4.6 is installed on top of the Knowledge Base updates. To avoid these issues, install .NET Framework 4.6.2 rather than version 4.6. For more information, see the [Knowledge Base article 3118750](https://support.microsoft.com/kb/3118750) and [4340191](https://support.microsoft.com/kb/4340191). 

You can use startup tasks to perform operations before a role starts. Installing 
2. Create a file named **install.cmd** and add the following install script to the file. 

- The script checks whether the specified version of the .NET Framework is already installed on the machine by querying the registry. If the .NET Framework version is not installed, then the .NET Framework web installer is opened. To help troubleshoot any issues, the script logs all activity to the file startuptasklog-(current date and time).txt that is stored in **InstallLogs** local storage. 
+ The script checks whether the specified version of the .NET Framework is present on your machine by querying the registry. If the .NET Framework version isn't installed, then the .NET Framework web installer is opened. To help troubleshoot any issues, the script logs all activity to the file startuptasklog-(current date and time).txt that is stored in **InstallLogs** local storage. 

> [!IMPORTANT] 
> Use a basic text editor like Windows Notepad to create the install.cmd file. If you use Visual Studio to create a text file and change the extension to .cmd, the file might still contain a UTF-8 byte order mark. This mark can cause an error when the first line of the script is run. To avoid this error, make the first line of the script a REM statement that can be skipped by the byte order processing. 

You can use startup tasks to perform operations before a role starts. Installing 
EXIT /B 0 
``` 

-3. 
Add the install.cmd file to each role by using **Add** > **Existing Item** in **Solution Explorer** as described earlier in this article. After this step is complete, all roles should have the .NET installer file and the install.cmd file. To configure Diagnostics, open the diagnostics.wadcfgx file and add the followin This XML configures Diagnostics to transfer the files in the log directory in the **NETFXInstall** resource to the Diagnostics storage account in the **netfx-install** blob container. ## Deploy your cloud service-When you deploy your cloud service, the startup tasks install the .NET Framework if it's not already installed. Your cloud service roles are in the *busy* state while the framework is being installed. If the framework installation requires a restart, the service roles might also restart. +When you deploy your cloud service, the startup tasks install the .NET Framework (if necessary). Your cloud service roles are in the *busy* state while the framework is being installed. If the framework installation requires a restart, the service roles might also restart. -## Additional resources +## Next steps * [Installing the .NET Framework][Installing the .NET Framework] * [Determine which .NET Framework versions are installed][How to: Determine Which .NET Framework Versions Are Installed] * [Troubleshooting .NET Framework installations][Troubleshooting .NET Framework Installations] |
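For illustration only (this is not code from the article), the registry test that the *install.cmd* script described above performs can be expressed in C#. The `394802` constant is the documented minimum `Release` value for .NET Framework 4.6.2:

```csharp
using Microsoft.Win32;

public static class NetFxVersionCheck
{
    // .NET Framework 4.6.2 writes a Release value of 394802 or higher to this key.
    public static bool IsNet462OrLaterInstalled()
    {
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(subkey))
        {
            object release = key?.GetValue("Release");
            return release is int value && value >= 394802;
        }
    }
}
```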
cloud-services | Cloud Services Enable Communication Role Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-enable-communication-role-instances.md | Title: Communication for Roles in Cloud Services (classic) | Microsoft Docs description: Role instances in Cloud Services can have endpoints (http, https, tcp, udp) defined for them that communicate with the outside or between other role instances. Previously updated : 02/21/2023 Last updated : 07/23/2024 -Cloud service roles communicate through internal and external connections. External connections are called **input endpoints** while internal connections are called **internal endpoints**. This topic describes how to modify the [service definition](cloud-services-model-and-package.md#csdef) to create endpoints. +Cloud service roles communicate through internal and external connections. External connections are called **input endpoints** while internal connections are called **internal endpoints**. This article describes how to modify the [service definition](cloud-services-model-and-package.md#csdef) to create endpoints. ## Input endpoint-The input endpoint is used when you want to expose a port to the outside. You specify the protocol type and the port of the endpoint which then applies for both the external and internal ports for the endpoint. If you want, you can specify a different internal port for the endpoint with the [localPort](/previous-versions/azure/reference/gg557552(v=azure.100)#inputendpoint) attribute. +The input endpoint is used when you want to expose a port to the outside. You specify the protocol type and the port of the endpoint, which then applies for both the external and internal ports for the endpoint. If you want, you can specify a different internal port for the endpoint with the [localPort](/previous-versions/azure/reference/gg557552(v=azure.100)#inputendpoint) attribute. The input endpoint can use the following protocols: **http, https, tcp, udp**. To create an input endpoint, add the **InputEndpoint** child element to the **En ``` ## Instance input endpoint-Instance input endpoints are similar to input endpoints but allows you map specific public-facing ports for each individual role instance by using port forwarding on the load balancer. You can specify a single public-facing port, or a range of ports. +Instance input endpoints are similar to input endpoints but allow you to map specific public-facing ports for each individual role instance by using port forwarding on the load balancer. You can specify a single public-facing port, or a range of ports. The instance input endpoint can only use **tcp** or **udp** as the protocol. To create an instance input endpoint, add the **InstanceInputEndpoint** child el ``` ## Internal endpoint-Internal endpoints are available for instance-to-instance communication. The port is optional and if omitted, a dynamic port is assigned to the endpoint. A port range can be used. There is a limit of five internal endpoints per role. +Internal endpoints are available for instance-to-instance communication. The port is optional and if omitted, a dynamic port is assigned to the endpoint. A port range can be used. There's a limit of five internal endpoints per role. The internal endpoint can use the following protocols: **http, tcp, udp, any**. You can also use a port range. ## Worker roles vs. Web roles-There is one minor difference with endpoints when working with both worker and web roles. 
The web role must have at minimum a single input endpoint using the **HTTP** protocol. +There's one minor difference with endpoints when working with both worker and web roles. The web role must have at minimum a single input endpoint using the **HTTP** protocol. ```xml <Endpoints> There is one minor difference with endpoints when working with both worker and w ``` ## Using the .NET SDK to access an endpoint-The Azure Managed Library provides methods for role instances to communicate at runtime. From code running within a role instance, you can retrieve information about the existence of other role instances and their endpoints, as well as information about the current role instance. +The Azure Managed Library provides methods for role instances to communicate at runtime. From code running within a role instance, you can retrieve information about the existence of other role instances and their endpoints. You can also obtain information about the current role instance. > [!NOTE] > You can only retrieve information about role instances that are running in your cloud service and that define at least one internal endpoint. You cannot obtain data about role instances running in a different service. The Azure Managed Library provides methods for role instances to communicate at You can use the [Instances](/previous-versions/azure/reference/ee741904(v=azure.100)) property to retrieve instances of a role. First use the [CurrentRoleInstance](/previous-versions/azure/reference/ee741907(v=azure.100)) to return a reference to the current role instance, and then use the [Role](/previous-versions/azure/reference/ee741918(v=azure.100)) property to return a reference to the role itself. -When you connect to a role instance programmatically through the .NET SDK, it's relatively easy to access the endpoint information. For example, after you've already connected to a specific role environment, you can get the port of a specific endpoint with this code: +When you connect to a role instance programmatically through the .NET SDK, it's relatively easy to access the endpoint information. For example, after you connect to a specific role environment, you can get the port of a specific endpoint with this code: ```csharp int port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["StandardWeb"].IPEndpoint.Port; ``` -The **Instances** property returns a collection of **RoleInstance** objects. This collection always contains the current instance. If the role does not define an internal endpoint, the collection includes the current instance but no other instances. The number of role instances in the collection will always be 1 in the case where no internal endpoint is defined for the role. If the role defines an internal endpoint, its instances are discoverable at runtime, and the number of instances in the collection will correspond to the number of instances specified for the role in the service configuration file. +The **Instances** property returns a collection of **RoleInstance** objects. This collection always contains the current instance. If the role doesn't define an internal endpoint, the collection includes the current instance but no other instances. The number of role instances in the collection is always one in the case where no internal endpoint is defined for the role. If the role defines an internal endpoint, its instances are discoverable at runtime, and the number of instances in the collection corresponds to the number of instances specified for the role in the service configuration file. 
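Building on the `Instances` collection, here's an illustrative sketch (not from the article) that collects the internal endpoint of every other instance of the current role. The endpoint name `InternalHttp` is a placeholder for whatever name your service definition declares:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class PeerDiscovery
{
    public static IList<IPEndPoint> GetPeerEndpoints()
    {
        RoleInstance self = RoleEnvironment.CurrentRoleInstance;

        // Other instances are only discoverable when the role defines at
        // least one internal endpoint.
        return self.Role.Instances
            .Where(instance => instance.Id != self.Id)
            .Select(instance => instance.InstanceEndpoints["InternalHttp"].IPEndpoint)
            .ToList();
    }
}
```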
> [!NOTE] > The Azure Managed Library does not provide a means of determining the health of other role instances, but you can implement such health assessments yourself if your service needs this functionality. You can use [Azure Diagnostics](cloud-services-dotnet-diagnostics.md) to obtain information about running role instances. > > -To determine the port number for an internal endpoint on a role instance, you can use the [`InstanceEndpoints`](/previous-versions/azure/reference/ee741917(v=azure.100)) property to return a Dictionary object that contains endpoint names and their corresponding IP addresses and ports. The [`IPEndpoint`](/previous-versions/azure/reference/ee741919(v=azure.100)) property returns the IP address and port for a specified endpoint. The `PublicIPEndpoint` property returns the port for a load balanced endpoint. The IP address portion of the `PublicIPEndpoint` property is not used. +To determine the port number for an internal endpoint on a role instance, you can use the [`InstanceEndpoints`](/previous-versions/azure/reference/ee741917(v=azure.100)) property to return a Dictionary object that contains endpoint names and their corresponding IP addresses and ports. The [`IPEndpoint`](/previous-versions/azure/reference/ee741919(v=azure.100)) property returns the IP address and port for a specified endpoint. The `PublicIPEndpoint` property returns the port for a load balanced endpoint. The IP address portion of the `PublicIPEndpoint` property isn't used. -Here is an example that iterates role instances. +Here's an example that iterates role instances. ```csharp foreach (RoleInstance roleInst in RoleEnvironment.CurrentRoleInstance.Role.Instances) foreach (RoleInstance roleInst in RoleEnvironment.CurrentRoleInstance.Role.Insta } ``` -Here is an example of a worker role that gets the endpoint exposed through the service definition and starts listening for connections. +Here's an example of a worker role that gets the endpoint exposed through the service definition and starts listening for connections. > [!WARNING] > This code will only work for a deployed service. When running in the Azure Compute Emulator, service configuration elements that create direct port endpoints (**InstanceInputEndpoint** elements) are ignored. Only allows network traffic from **WebRole1** to **WorkerRole1**, **WebRole1** t </ServiceDefinition> ``` -An XML schema reference for the elements used above can be found [here](/previous-versions/azure/reference/gg557551(v=azure.100)). +An XML schema reference for the elements used can be found [here](/previous-versions/azure/reference/gg557551(v=azure.100)). ## Next steps Read more about the Cloud Service [model](cloud-services-model-and-package.md). |
cloud-services | Cloud Services Guestos Family 2 3 4 Retirement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family-2-3-4-retirement.md | Title: Guest OS family 2, 3, and 4 retirement notice | Microsoft Docs -description: Information about when the Azure Guest OS Family 2, 3, and 4 retirement happened and how to determine if you're affected. +description: Information about when the Azure Guest OS Family 2, 3, and 4 retirement happened and how to determine if this retirement affects you. Previously updated : 07/08/2024 Last updated : 07/23/2024 

foreach($subscription in Get-AzureSubscription) { 
} 
``` 

-Your cloud services are impacted by this retirement if the `osFamily` column in the script output contains a `2`, `3`, `4`, or is empty. If empty, the default `osFamily` attribute will point to `osFamily` `5`. 
+This retirement affects your cloud services if the `osFamily` column in the script output contains a `2`, `3`, `4`, or is empty. If empty, the default `osFamily` attribute points to `osFamily` `5`. 

## Recommendations 

-If you're affected, we recommend you migrate your Cloud Service or [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) roles to one of the supported Guest OS Families: 
+If this retirement affects you, we recommend you migrate your Cloud Service or [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) roles to one of the supported Guest OS Families: 

**Guest OS family 7.x** - Windows Server 2022 *(recommended)* |
cloud-services | Cloud Services Guestos Family1 Retirement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family1-retirement.md | Title: Guest OS family 1 retirement notice | Microsoft Docs -description: Provides information about when the Azure Guest OS Family 1 retirement happened and how to determine if you are affected +description: Provides information about when the Azure Guest OS Family 1 retirement happened and how to determine if its retirement affects you. Previously updated : 02/21/2023 Last updated : 07/23/2024 -**Sept 2, 2014** The Azure Guest operating system (Guest OS) Family 1.x, which is based on the Windows Server 2008 operating system, was officially retired. All attempts to deploy new services or upgrade existing services using Family 1 will fail with an error message informing you that the Guest OS Family 1 has been retired. +**Sept 2, 2014** The Azure Guest operating system (Guest OS) Family 1.x, which is based on the Windows Server 2008 operating system, was officially retired. All attempts to deploy new services or upgrade existing services using Family 1 fail with an error message informing you that the Guest OS Family 1 is retired. -**November 3, 2014** Extended support for Guest OS Family 1 ended and it is fully retired. All services still on Family 1 will be impacted. We may stop those services at any time. There is no guarantee your services will continue to run unless you manually upgrade them yourself. +**November 3, 2014** Extended support for Guest OS Family 1 ended. Guest OS Family 1 is retired. This retirement affects all services still on Family 1. We may stop those services at any time. There's no guarantee your services continue to run unless you manually upgrade them yourself. -If you have additional questions, visit the [Microsoft Q&A question page for Cloud Services](/answers/topics/azure-cloud-services.html) or [contact Azure support](https://azure.microsoft.com/support/options/). +If you have other questions, visit the [Microsoft Question & Answer page for Cloud Services](/answers/topics/azure-cloud-services.html) or [contact Azure support](https://azure.microsoft.com/support/options/). ## Are you affected? -Your Cloud Services are affected if any one of the following applies: +This retirement affects your cloud services if any one of the following applies: 1. You have a value of "osFamily = "1" explicitly specified in the ServiceConfiguration.cscfg file for your Cloud Service.-2. You do not have a value for osFamily explicitly specified in the ServiceConfiguration.cscfg file for your Cloud Service. Currently, the system uses the default value of "1" in this case. +2. You don't have a value for osFamily explicitly specified in the ServiceConfiguration.cscfg file for your Cloud Service. Currently, the system uses the default value of "1" in this case. 3. The Azure portal lists your Guest Operating System family value as "Windows Server 2008". To find which of your cloud services are running which OS Family, you can run the following script in Azure PowerShell, though you must [set up Azure PowerShell](/powershell/azure/) first. For more information on the script, see [Azure Guest OS Family 1 End of Life: June 2014](/archive/blogs/ryberry/azure-guest-os-family-1-end-of-life-june-2014). foreach($subscription in Get-AzureSubscription) { } ``` -Your cloud services will be impacted by OS Family 1 retirement if the osFamily column in the script output is empty or contains a "1". 
+The OS Family 1 retirement affects your cloud services if the osFamily column in the script output is empty or contains a "1". -## Recommendations if you are affected +## Recommendations We recommend you migrate your Cloud Service roles to one of the supported Guest OS Families: We recommend you migrate your Cloud Service roles to one of the supported Guest 1. Ensure that your application is using SDK 1.3 and above with .NET framework 3.5 or 4.0. 2. Set the osFamily attribute to "2" in the ServiceConfiguration.cscfg file, and redeploy your cloud service. -## Extended Support for Guest OS Family 1 ended Nov 3, 2014 +## Extended Support for Guest OS Family 1 ended November 3, 2014 Cloud services on Guest OS family 1 are no longer supported. Migrate off family 1 as soon as possible to avoid service disruption. |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | Title: List of updates applied to the Azure Guest OS | Microsoft Docs -description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to the Guest OS you are using. +description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to your Guest OS. ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 Previously updated : 07/01/2024 Last updated : 07/23/2024 # Azure Guest OS-The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in. +The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to your Guest OS. Updates always carry forward for the particular [family][family-explain] they were introduced in. ## June 2024 Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 18-07 | [4338613], [4338600], [4338605] |.NET 3.5, 4.x, 4.5x Security |4.56|July 10, 2018 | | Rel 18-07 | [4338832] |Flash |3.63, 4.76, 5.21 |July 10, 2018 | | Rel 18-07 | [4339093] |Internet Explorer |2.76, 3.63, 4.76 |July 10, 2018 |-| N/A | [4284826] |June non-security rollup |2.76 |June 12, 2018 | -| N/A | [4284855] |June non-security rollup |3.63 |June 12, 2018 | -| N/A | [4284815] |June non-security rollup |4.56 |June 12, 2018 | +| N/A | [4284826] |June nonsecurity rollup |2.76 |June 12, 2018 | +| N/A | [4284855] |June nonsecurity rollup |3.63 |June 12, 2018 | +| N/A | [4284815] |June nonsecurity rollup |4.56 |June 12, 2018 | ## June 2018 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 18-06 | [4284878] |Windows Security only |4.55 |June 12, 2018 | | Rel 18-06 | [4230450] |Internet Explorer |2.75, 3.62, 4.75 |June 12, 2018 | | Rel 18-06 | [4287903] |Flash |3.62, 4.75, 5.20 |June 12, 2018 |-| N/A | [4103718] |May non-security rollup |2.75 |May 8, 2018 | -| N/A | [4103730] |May non-security rollup |3.62 |May 8, 2018 | -| N/A | [4103725] |May non-security rollup |4.55 |May 8, 2018 | -| N/A | [4040980], [4040977] |Sept ΓÇÖ17 .NET non-security rollup |2.75 |November 14, 2017 | -| N/A | [4095874] |May .NET 3.5 non-security release |2.75 |May 8, 2018 | -| N/A | [4096495] |May .NET 4.x non-security release |2.75 |May 8, 2018 | -| N/A | [4040975] |Sept ΓÇÖ17 .NET non-security rollup |3.62 |November 14, 2017 | -| N/A | [4095872] |May .NET 3.5 non-security release |3.62 |May 8, 2018 | -| N/A | [4096494] |May .NET 4.x non-security release |3.62 |May 8, 2018 | -| N/A | [4096416] |May .NET 4.5x non-security release |3.62 |May 8, 2018 | -| N/A | [4040974], [4040972] |Sept ΓÇÖ17 .NET non-security rollup |4.55 |November 14, 2017 | -| N/A | [4043763] |Oct ΓÇÖ17 .NET non-security rollup |4.55 |September 12, 2017 | -| N/A | [4095876] |May .NET 4.x non-security release |4.55 |May 8, 2018 | -| N/A | [4096417] |May .NET 4.5x non-security release |4.55 |May 8, 2018 | +| N/A | 
[4103718] |May nonsecurity rollup |2.75 |May 8, 2018 | 
+| N/A | [4103730] |May nonsecurity rollup |3.62 |May 8, 2018 | 
+| N/A | [4103725] |May nonsecurity rollup |4.55 |May 8, 2018 | 
+| N/A | [4040980], [4040977] |Sept '17 .NET nonsecurity rollup |2.75 |November 14, 2017 | 
+| N/A | [4095874] |May .NET 3.5 nonsecurity release |2.75 |May 8, 2018 | 
+| N/A | [4096495] |May .NET 4.x nonsecurity release |2.75 |May 8, 2018 | 
+| N/A | [4040975] |Sept '17 .NET nonsecurity rollup |3.62 |November 14, 2017 | 
+| N/A | [4095872] |May .NET 3.5 nonsecurity release |3.62 |May 8, 2018 | 
+| N/A | [4096494] |May .NET 4.x nonsecurity release |3.62 |May 8, 2018 | 
+| N/A | [4096416] |May .NET 4.5x nonsecurity release |3.62 |May 8, 2018 | 
+| N/A | [4040974], [4040972] |Sept '17 .NET nonsecurity rollup |4.55 |November 14, 2017 | 
+| N/A | [4043763] |Oct '17 .NET nonsecurity rollup |4.55 |September 12, 2017 | 
+| N/A | [4095876] |May .NET 4.x nonsecurity release |4.55 |May 8, 2018 | 
+| N/A | [4096417] |May .NET 4.5x nonsecurity release |4.55 |May 8, 2018 | 
| N/A | [4132216] |May SSU |5.20 |May 8, 2018 | 

## May 2018 Guest OS 
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | 
The following tables show the Microsoft Security Response Center (MSRC) updates 
| Rel 18-05 | [4054856] |.NET 4.7x Security |5.19 |May 8, 2018 | 
| Rel 18-05 | [4103768] |Internet Explorer |2.74, 3.61, 4.74 |May 8, 2018 | 
| Rel 18-05 | [4103729] |Flash |3.61, 4.74, 5.19 |May 8, 2018 |-| N/A | [4093118] |April non-security rollup |2.73 |April 10, 2018 | 
-| N/A | [4093123] |April non-security rollup |3.61 |April 10, 2018 | 
-| N/A | [4093114] |April non-security rollup |4.74 |April 10, 2018 | 
+| N/A | [4093118] |April nonsecurity rollup |2.73 |April 10, 2018 | 
+| N/A | [4093123] |April nonsecurity rollup |3.61 |April 10, 2018 | 
+| N/A | [4093114] |April nonsecurity rollup |4.74 |April 10, 2018 | 
| N/A | [4093137] |April SSU |5.19 |April 10, 2018 | 
| N/A | [4093753] |Timezone update |2.74, 3.61, 4.74 |April 10, 2018 | 

The following tables show the Microsoft Security Response Center (MSRC) updates 
| Rel 18-04 | [4093115] |Windows Security only |4.53 |April 10, 2018 | 
| Rel 18-04 | [4092946] |Internet Explorer |2.73, 3.60, 4.53 |April 10, 2018 | 
| Rel 18-04 | [4093110] |Flash |3.60, 4.53, 5.18 |April 10, 2018 |-| N/A | [4088875] |March non-security rollup |2.73 |March 13, 2018 | 
-| N/A | [4099950] |March non-security rollup pre-requisite|2.73 |March 13, 2018 | 
-| N/A | [4088877] |March non-security rollup |3.60 |March 13, 2018 | 
-| N/A | [4088876] |March non-security rollup |4.53 |March 13, 2018 | 
+| N/A | [4088875] |March nonsecurity rollup |2.73 |March 13, 2018 | 
+| N/A | [4099950] |March nonsecurity rollup prerequisite|2.73 |March 13, 2018 | 
+| N/A | [4088877] |March nonsecurity rollup |3.60 |March 13, 2018 | 
+| N/A | [4088876] |March nonsecurity rollup |4.53 |March 13, 2018 | 

## March 2018 Guest OS 
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | 
The following tables show the Microsoft Security Response Center (MSRC) updates 
| Rel 18-03 | [4088878], [4088880], [4088879] |Windows Security only |2.72, 3.59, 4.52 |March 13, 2018 | 
| Rel 18-03 | [4089187] |Internet Explorer |2.72, 3.59, 4.52 |March 13, 2018 | 
| Rel 18-03 | [4074595] |Flash |3.59, 4.52, 5.17 |March 13, 2018 |-| N/A | [4074598] |February nonsecurity 
rollup |2.72 |February 13, 2018 | +| N/A | [4074593] |February nonsecurity rollup |3.59 |February 13, 2018 | +| N/A | [4074594] |February nonsecurity rollup |4.52 |February 13, 2018 | | N/A | [4074837] |Timezone update |2.72, 3.59, 4.52 |February 13, 2018 | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 18-02 | [4074587], [4074589], [4074597] |Windows Security only |2.71, 3.58, 4.51 |February 13, 2018 | | Rel 18-02 | [4074736] |Internet Explorer |2.71, 3.58, 4.51 |February 13, 2018 | | Rel 18-02 | [4074595] |Flash |3.58, 4.51, 5.16 |February 13, 2018 |-| N/A | [4056894] |January non-security rollup |2.71 |January 4, 2018 | -| N/A | [4056896] |January non-security rollup |3.58 |January 4, 2018 | -| N/A | [4056895] |January non-security rollup |4.51 |January 4, 2018 | +| N/A | [4056894] |January nonsecurity rollup |2.71 |January 4, 2018 | +| N/A | [4056896] |January nonsecurity rollup |3.58 |January 4, 2018 | +| N/A | [4056895] |January nonsecurity rollup |4.51 |January 4, 2018 | | N/A | [4054176], [4054172] |January .NET rollup |2.71 |January 4, 2018 | | N/A | [4054175], [4054171] |January .NET rollup |3.58 |January 4, 2018 | | N/A | [4054177], [4054170] |January .NET rollup |4.51 |January 4, 2018 | The following tables show the Microsoft Security Response Center (MSRC) updates | | | | | | | Rel 18-01 | [4056898], [4056897], [4056899] |Windows Security only |2.70, 3.57, 4.50 |January 3, 2018 | | Rel 18-01 | [4056890], [4056892] |Windows Security only |5.15 |January 3, 2018 |-| N/A | [4054518] |December non-security rollup |2.70 |December 12, 2017 | -| N/A | [4054520] |December non-security rollup |3.57 |December 12, 2017 | -| N/A | [4054519] |December non-security rollup |4.50 |December 12, 2017 | +| N/A | [4054518] |December nonsecurity rollup |2.70 |December 12, 2017 | +| N/A | [4054520] |December nonsecurity rollup |3.57 |December 12, 2017 | +| N/A | [4054519] |December nonsecurity rollup |4.50 |December 12, 2017 | | N/A | [4051956] |January timezone update |2.70, 3.57, 4.50 |December 12, 2017 | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-12 | [4054521], [4054522], [4054523] |Windows Security only |2.69, 3.56, 4.49 |December 12, 2017 | | Rel 17-12 | [4052978] |Internet Explorer |2.69, 3.56, 4.49 |December 12, 2017 | | Rel 17-12 | [4052978] |Flash |3.56, 4.49, 5.14 |December 12, 2017 |-| N/A | [4048957] |November non-security rollup |2.69 |November 14, 2017 | -| N/A | [4048959] |November non-security rollup |3.56 |November 14, 2017 | -| N/A | [4048958] |November non-security rollup |4.49 |November 14, 2017 | +| N/A | [4048957] |November nonsecurity rollup |2.69 |November 14, 2017 | +| N/A | [4048959] |November nonsecurity rollup |3.56 |November 14, 2017 | +| N/A | [4048958] |November nonsecurity rollup |4.49 |November 14, 2017 | | N/A | [4049068] |December Timezone update |2.69, 3.56, 4.49 |December 12, 2017 | ## November 2017 Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-11 | [4048960], [4048962], [4048961] |Windows Security only |2.68, 3.55, 4.48 |November 14, 2017 | | Rel 17-11 | [4047206] |Internet Explorer |2.68, 3.55, 4.48 |November 14, 2017 | | Rel 17-11 | [4048951] |Flash |3.55, 4.48, 5.13 |November 14, 2017 |-| N/A | [4041681] |October non-security rollup |2.68 |October 10, 2017 | -| N/A | [4041690] |October non-security rollup |3.55 |October 10, 2017 | -| N/A | [4041693] |October non-security rollup |4.48 |October 10, 2017 | +| N/A | [4041681] 
|October nonsecurity rollup |2.68 |October 10, 2017 | +| N/A | [4041690] |October nonsecurity rollup |3.55 |October 10, 2017 | +| N/A | [4041693] |October nonsecurity rollup |4.48 |October 10, 2017 | | N/A | [3191566] |Update for Windows Management Framework 5.1 |2.68 |November 14, 2017 | | N/A | [3191565] |Update for Windows Management Framework 5.1 |3.55 |November 14, 2017 | | N/A | [3191564] |Update for Windows Management Framework 5.1 |4.48 |November 14, 2017 | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-10 | [4041678], [4041679], [4041687] |Windows Security only |2.67, 3.54, 4.47 |October 10, 2017 | | Rel 17-10 | [4040685], |Internet Explorer |2.67, 3.54, 4.47 |October 10, 2017 | | Rel 17-10 | [4041681], [4041690], [4041693] |Windows Monthly Rollups |2.67, 3.54, 4.47 |October 10, 2017 |-| N/A | [4038777] |September non-security rollup |2.67 |September 12, 2017 | -| N/A | [4038799] |September non-security rollup |3.54 |September 12, 2017 | -| N/A | [4038792] |September non-security rollup |4.47 |September 12, 2017 | -| N/A | [4040980] |September .NET non-security rollup |2.67 |September 12, 2017 | -| N/A | [4040979] |September .NET non-security rollup |3.54 |September 12, 2017 | -| N/A | [4040981] |September .NET non-security rollup |4.47 |September 12, 2017 | +| N/A | [4038777] |September nonsecurity rollup |2.67 |September 12, 2017 | +| N/A | [4038799] |September nonsecurity rollup |3.54 |September 12, 2017 | +| N/A | [4038792] |September nonsecurity rollup |4.47 |September 12, 2017 | +| N/A | [4040980] |September .NET nonsecurity rollup |2.67 |September 12, 2017 | +| N/A | [4040979] |September .NET nonsecurity rollup |3.54 |September 12, 2017 | +| N/A | [4040981] |September .NET nonsecurity rollup |4.47 |September 12, 2017 | ## September 2017 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-09 | [4040966], [4040960], [4040965], [4040959], [4033988], [4040955], [4040967], [4040958]|September .NET update |2.66, 3.53, 4.46 |September 12, 2017 | | Rel 17-09 | [4036586] |Internet explorer |2.66, 3.53, 4.46 |September 12, 2017 | | CVE-2017-8704 | [4038782] |Denial of Service |5.11 |September 12, 2017 |-| N/A | [4034664] |August non-security rollup |2.66 |August 8, 2017 | -| N/A | [4034665] |August non-security rollup |5.11 |August 8, 2017 | -| N/A | [4034681] |August non-security rollup |4.46 |August 8, 2017 | +| N/A | [4034664] |August nonsecurity rollup |2.66 |August 8, 2017 | +| N/A | [4034665] |August nonsecurity rollup |5.11 |August 8, 2017 | +| N/A | [4034681] |August nonsecurity rollup |4.46 |August 8, 2017 | ## August 2017 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-07 | [4034733] |Internet Explorer |2.65, 3.52, 4.45, 5.10 |August 8, 2017 | | Rel 17-07 | [4034664], [4034665], [4034681] |Windows Monthly Rollups |2.65, 3.52, 4.45 |August 8, 2017 | | Rel 17-07 | [4034668], [4034660], [4034658], [4034674] |Re-release of CVE-2017-0071, Re-release of CVE-2017-0228 |5.10 |August 8, 2017 |-| Rel 17-07 | [4025341] |July non-security rollup |2.65 |July 11, 2017 | -| Rel 17-07 | [4025331] |July non-security rollup |3.52 |July 11, 2017 | -| Rel 17-07 | [4025336] |July non-security rollup |4.45 |July 11, 2017 | +| Rel 17-07 | [4025341] |July 
nonsecurity rollup |2.65 |July 11, 2017 | +| Rel 17-07 | [4025331] |July nonsecurity rollup |3.52 |July 11, 2017 | +| Rel 17-07 | [4025336] |July nonsecurity rollup |4.45 |July 11, 2017 | ## July 2017 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-07 | [4025376] |Flash |3.51, 4.44, 5.9 |July 11, 2017 | | Rel 17-07 | [4025252] |Internet Explorer |2.64, 3.51, 4.44 |July 11, 2017 | | N/A | [4020322] |Timezone Update |2.64, 3.51, 4.44 |July 11, 2017 |-| N/A | [4022719] |June non-security rollup |2.64 |June 13, 2017 | -| N/A | [4022724] |June non-security rollup |3.51 |June 13, 2017 | -| N/A | [4022726] |June non-security rollup |4.44 |June 13, 2017 | +| N/A | [4022719] |June nonsecurity rollup |2.64 |June 13, 2017 | +| N/A | [4022724] |June nonsecurity rollup |3.51 |June 13, 2017 | +| N/A | [4022726] |June nonsecurity rollup |4.44 |June 13, 2017 | ## June 2017 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-06 | [4022730] |Security update for Adobe Flash Player |3.50, 4.43, 5.8 |June 13, 2017 | | Rel 17-06 | [4015217], [4015221], [4015583], [4015550], [4015219] |Re-release of CVE-2017-0167 |4.43, 5.8 |April 11, 2017 | | N/A | [4023136] |Timezone update |2.63, 3.50, 4.43 |June 13, 2017 |-| N/A | [4019264] |May non-security rollup |2.63 |June 13, 2017 | -| N/A | [4014545] |May .NET non-security rollup |2.63 |April 11, 2017 | -| N/A | [4014508] |May .NET non-security rollup |2.63 |May 9, 2017 | -| N/A | [4014511] |May .NET non-security rollup |2.63 |May 9, 2017 | -| N/A | [4014514] |May .NET non-security rollup |2.63 |May 9, 2017 | -| N/A | [4019216] |May non-security rollup |3.50 |May 9, 2017 | -| N/A | 4014503 |May .NET non-security rollup |3.50 |May 9, 2017 | -| N/A | [4014506] |May .NET non-security rollup |3.50 |May 9, 2017 | -| N/A | [4014509] |May .NET non-security rollup |3.50 |May 9, 2017 | -| N/A | [4014513] |May .NET non-security rollup |3.50 |May 9, 2017 | -| N/A | [4019215] |May non-security rollup |4.43 |May 9, 2017 | -| N/A | [4014505] |May .NET non-security rollup |4.43 |May 9, 2017 | -| N/A | [4014507] |May .NET non-security rollup |4.43 |May 9, 2017 | -| N/A | [4014510] |May .NET non-security rollup |4.43 |May 9, 2017 | -| N/A | [4014512] |May .NET non-security rollup |4.43 |May 9, 2017 | +| N/A | [4019264] |May nonsecurity rollup |2.63 |June 13, 2017 | +| N/A | [4014545] |May .NET nonsecurity rollup |2.63 |April 11, 2017 | +| N/A | [4014508] |May .NET nonsecurity rollup |2.63 |May 9, 2017 | +| N/A | [4014511] |May .NET nonsecurity rollup |2.63 |May 9, 2017 | +| N/A | [4014514] |May .NET nonsecurity rollup |2.63 |May 9, 2017 | +| N/A | [4019216] |May nonsecurity rollup |3.50 |May 9, 2017 | +| N/A | 4014503 |May .NET nonsecurity rollup |3.50 |May 9, 2017 | +| N/A | [4014506] |May .NET nonsecurity rollup |3.50 |May 9, 2017 | +| N/A | [4014509] |May .NET nonsecurity rollup |3.50 |May 9, 2017 | +| N/A | [4014513] |May .NET nonsecurity rollup |3.50 |May 9, 2017 | +| N/A | [4019215] |May nonsecurity rollup |4.43 |May 9, 2017 | +| N/A | [4014505] |May .NET nonsecurity rollup |4.43 |May 9, 2017 | +| N/A | [4014507] |May .NET nonsecurity rollup |4.43 |May 9, 2017 | +| N/A | [4014510] |May .NET nonsecurity rollup |4.43 |May 9, 2017 | +| N/A | [4014512] |May .NET nonsecurity rollup 
|4.43 |May 9, 2017 | ## May 2017 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 17-05 | [4022345] |Microsoft Security Advisory |5.7 | May 9, 2017 | | Rel 17-05 | [4021279] |.NET /ASP.NET Core Advisory |2.62, 3.49, 4.42, 5.7 | May 9, 2017 | | N/A | [4012864] |Timezone Update |2.62, 3.49, 4.42 | May 9, 2017 |-| N/A | [4014565] |April .NET non-security rollup |2.62 | April 11, 2017 | -| N/A | [4014559] |April .NET non-security rollup |2.62 | April 11, 2017 | +| N/A | [4014565] |April .NET nonsecurity rollup |2.62 | April 11, 2017 | +| N/A | [4014559] |April .NET nonsecurity rollup |2.62 | April 11, 2017 | | N/A | [4015549] |April non-Security Rollup |2.62 | April 11, 2017 | | N/A | [4019990] |D3DCompiler update - requirement for .NET 4.7 |3.49 | May 9, 2017 |-| N/A | [4014563] |April .NET non-security rollup |3.49 | April 11, 2017 | -| N/A | [4014557] |April .NET non-security rollup |3.49 | April 11, 2017 | -| N/A | [4014545] |April .NET non-security rollup |3.49 | April 11, 2017 | -| N/A | [4014548] |April .NET non-security rollup |3.49 | April 11, 2017 | -| N/A | [4015551] |April non-security rollup |3.49 | April 11, 2017 | +| N/A | [4014563] |April .NET nonsecurity rollup |3.49 | April 11, 2017 | +| N/A | [4014557] |April .NET nonsecurity rollup |3.49 | April 11, 2017 | +| N/A | [4014545] |April .NET nonsecurity rollup |3.49 | April 11, 2017 | +| N/A | [4014548] |April .NET nonsecurity rollup |3.49 | April 11, 2017 | +| N/A | [4015551] |April nonsecurity rollup |3.49 | April 11, 2017 | | N/A | [3173424] |Servicing Stack Update |4.42 | July 12, 2016 |-| N/A | [4014555] |April .NET non-security rollup |4.42 | April 11, 2017 | -| N/A | [4014567] |April .NET non-security rollup |4.42 | April 11, 2017 | -| N/A | [4015550] |April non-security rollup |4.42 | April 11, 2017 | +| N/A | [4014555] |April .NET nonsecurity rollup |4.42 | April 11, 2017 | +| N/A | [4014567] |April .NET nonsecurity rollup |4.42 | April 11, 2017 | +| N/A | [4015550] |April nonsecurity rollup |4.42 | April 11, 2017 | | N/A | [4013418] |Servicing Stack Update |5.7 | March 14, 2017 | ## April 2017 Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates | MS16-077 |[3165191] |Security Update for WPAD |4.33, 3.40, 2.52 |June 14, 2016 | | MS16-080 |[3164302] |Security Update for Microsoft Windows PDF |4.33, 3.40 |June 14, 2016 | | MS16-081 |[3160352] |Security Update for Active Directory |4.33, 3.40, 2.52 |June 14, 2016 |-| N/A |[2922223] |You cannot change system time if RealTimeIsUniversal registry entry is enabled in Windows |2.52 |June 14, 2016 | +| N/A |[2922223] |You can't change system time if RealTimeIsUniversal registry entry is enabled in Windows |2.52 |June 14, 2016 | | N/A |[3121255] |"0x00000024" Stop error in FsRtlNotifyFilterReportChange and copy file may fail in Windows |2.52 |June 14, 2016 | | N/A |[3125424] |LSASS deadlocks cause Windows Server 2012 R2 or Windows Server 2012 not to respond |4.33, 3.40 |June 14, 2016 | | N/A |[3125574] |Convenience rollup update for Windows 7 SP1 and Windows Server 2008 R2 SP1 |2.52 |June 14, 2016 | The following tables show the Microsoft Security Response Center (MSRC) updates | N/A |[3012325] |Windows APN database entries update for DIGI, Vodafone, and Telekom mobile operators in Windows 8.1 and Windows 8 |4.15, 3.22, 2.34 |Jan 13 2015 | | N/A |[3007054] |PIN-protected printing option always 
shows when you print a document within a Windows Store application in Windows |4.15, 3.22, 2.34 |Jan 13 2015 | | N/A |[2999802] |Solid lines instead of dotted lines are printed in Windows |4.15, 3.22, 2.34 |Jan 13 2015 |-| N/A |[2896881] |Long logon time when you use the AddPrinterConnection VBScript command to map printers for users during logon process in Windows |4.15, 3.22, 2.34 |Jan 13 2015 | +| N/A |[2896881] |Long sign in time when you use the AddPrinterConnection VBScript command to map printers for users during sign in process in Windows |4.15, 3.22, 2.34 |Jan 13 2015 | [4457131]: https://support.microsoft.com/kb/4457131 [4457145]: https://support.microsoft.com/kb/4457145 |
cloud-services | Cloud Services Guestos Retirement Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-retirement-policy.md | Title: Supportability and retirement policy guide for Azure Guest OS | Microsoft Docs -description: Provides information about what Microsoft will support as regards to the Azure Guest OS used by Cloud Services. +description: Provides information about what Microsoft supports regarding the Azure Guest OS used by Cloud Services. -The information on this page relates to the Azure Guest operating system ([Guest OS](cloud-services-guestos-update-matrix.md)) for Cloud Services worker and web roles (PaaS). It does not apply to Virtual Machines (IaaS). +The information on this page relates to the Azure Guest operating system ([Guest OS](cloud-services-guestos-update-matrix.md)) for Cloud Services worker and web roles (PaaS). It doesn't apply to Virtual Machines (IaaS). -Microsoft has a published [support policy for the Guest OS](https://support.microsoft.com/gp/azure-cloud-lifecycle-faq). The page you are reading now describes how the policy is implemented. +Microsoft has a published [support policy for the Guest OS](https://support.microsoft.com/gp/azure-cloud-lifecycle-faq). This page describes how the policy is implemented. -The policy is +The policy is: -1. Microsoft will support **at least the latest two families of the Guest OS**. When a family is retired, customers have 12 months from the official retirement date to update to a newer supported Guest OS family. -2. Microsoft will support **at least the latest two versions of the supported Guest OS families**. -3. Microsoft will support **at least the latest two versions of the Azure SDK**. When a version of the SDK is retired, customers will have 12 months from the official retirement date to update to a newer version. +* Microsoft supports **at least the latest two families of the Guest OS**. When a family is retired, customers have 12 months from the official retirement date to update to a newer supported Guest OS family. +* Microsoft supports **at least the latest two versions of the supported Guest OS families**. +* Microsoft supports **at least the latest two versions of the Azure SDK**. When a version of the SDK is retired, customers have 12 months from the official retirement date to update to a newer version. -At times, more than two families or releases may be supported. Official Guest OS support information will appear on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). +At times, more than two families or releases may be supported. Official Guest OS support information appears on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). ## When a Guest OS version is retired-New Guest OS **versions** are introduced about every month to incorporate the latest MSRC updates. Because of the regular monthly updates, a Guest OS version is normally disabled around 60 days after its release. This activity keeps at least two Guest OS versions for each family available for use. +New Guest OS **versions** are introduced about every month to incorporate the latest Microsoft Security Response Center (MSRC) updates. Because of the regular monthly updates, a Guest OS version is normally disabled around 60 days after its release. This activity keeps at least two Guest OS versions for each family available for use. 
### Process during a Guest OS family retirement-Once the retirement is announced, customers have a 12 month "transition" period before the older family is officially removed from service. This transition time may be extended at the discretion of Microsoft. Updates will be posted on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). +Once the retirement is announced, customers have a 12 month "transition" period before the older family is officially removed from service. This transition time may be extended at the discretion of Microsoft. Microsoft posts updates on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). -A gradual retirement process will begin six (6) months into the transition period. During this time: +A gradual retirement process begins six (6) months into the transition period. During this time: -1. Microsoft will notify customers of the retirement. -2. The newer version of the Azure SDK won't support the retired Guest OS family. -3. New deployments and redeployments of Cloud Services will not be allowed on the retired family +* Microsoft notifies customers of the retirement. +* The newer version of the Azure SDK doesn't support the retired Guest OS family. +* New deployments and redeployments of Cloud Services are prohibited on the retired family -Microsoft will continue to introduce new Guest OS version incorporating the latest MSRC updates until the last day of the transition period, known as the "expiration date". On the expiration date, Cloud Services still running will be unsupported under the Azure SLA. Microsoft has the discretion to force upgrade, delete or stop those services after that date. +Microsoft continues to introduce new Guest OS version incorporating the latest MSRC updates until the last day of the transition period, known as the "expiration date." On the expiration date, cloud services still running are unsupported under the Azure Service Level Agreement (SLA). Microsoft has the discretion to force upgrade, delete or stop those services after that date. ### Process during a Guest OS Version retirement-If customers set their Guest OS to automatically update, they never have to worry about dealing with Guest OS versions. They will always be using the latest Guest OS version. +If customers set their Guest OS to automatically update, they never have to worry about dealing with Guest OS versions. They're always using the latest Guest OS version. Guest OS Versions are released every month. Because of the rate of regular releases, each version has a fixed lifespan. -At 60 days into the lifespan, a version is "*disabled*". "Disabled" means that the version is removed from the portal. The version can no longer be set from the CSCFG configuration file. Existing deployments are left running. But new deployments and code and configuration updates to existing deployments will not be allowed. +At 60 days into the lifespan, a version is "*disabled*." "Disabled" means that the version is removed from the portal. The version can no longer be set from the CSCFG configuration file. Existing deployments are left running, but new deployments and code and configuration updates to existing deployments are prohibited. -Sometime after becoming "disabled", the Guest OS version "expires" and any installations still running that expired version are exposed to security and vulnerability issues. 
Generally, expiration is done in batches, so the period from disablement to expiration can vary. +Sometime after the Guest OS version becomes "disabled," it "expires," and any installations still running that expired version are exposed to security and vulnerability issues. Generally, expiration is done in batches, so the period from disablement to expiration can vary. -Customers who configure their services to update the Guest OS manually, should ensure that their services are running on a supported Guest OS. If a service is configured to update the Guest OS automatically, the underlying platform will ensure compliance and will upgrade to the latest Guest OS. +Customers who configure their services to update the Guest OS manually, should ensure that their services are running on a supported Guest OS. If a service is configured to update the Guest OS automatically, the underlying platform ensures compliance and upgrades to the latest Guest OS. -These periods may be made longer at Microsoft's discretion to ease customer transitions. Any changes will be communicated on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). +These periods may be made longer at Microsoft's discretion to ease customer transitions. Microsoft communicates any changes on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). ### Notifications during retirement-* **Family retirement** <br>Microsoft will use blog posts and portal notification. Customers who are still using a retired Guest OS family will be notified through direct communication (email, portal messages, phone call) to assigned service administrators. All changes will be posted to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). -* **Version Retirement** <br>All changes and the dates they occur will be posted to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md), including release, disabled, and expiration. Services admins will receive emails if they have deployments running on a disabled Guest OS version or family. The timing of these emails can vary. Generally they are at least a month before disablement, though this timing is not an official SLA. +* **Family retirement** <br>Microsoft uses blog posts and portal notification. Microsoft informs customers who are still using a retired Guest OS family through direct communication (email, portal messages, phone call) to assigned service administrators. Microsoft posts all changes to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md). +* **Version Retirement** <br>Microsoft posts all changes and the dates they occur to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md), including release, disabled, and expiration. Services admins receive emails if they have deployments running on a disabled Guest OS version or family. The timing of these emails can vary. Generally they are at least a month before disablement, though this timing isn't an official SLA. ## Frequently asked questions **How can I mitigate the impacts of migration?** We recommend that you use latest Guest OS family for designing your Cloud Services. -1. Start planning your migration to a newer family early. -2. Set up temporary test deployments to test your Cloud Service running on the new family. -3. 
Set your Guest OS version to **Automatic** (osVersion=* in the [.cscfg](cloud-services-model-and-package.md#cscfg) file) so the migration to new Guest OS versions occurs automatically. +* Start planning your migration to a newer family early. +* Set up temporary test deployments to test your Cloud Service running on the new family. +* Set your Guest OS version to **Automatic** (osVersion=* in the [.cscfg](cloud-services-model-and-package.md#cscfg) file) so the migration to new Guest OS versions occurs automatically. **What if my web application requires deeper integration with the OS?** -If your web application architecture depends on underlying features of the operating system, use platform supported capabilities such as [startup tasks](cloud-services-startup-tasks.md) or other extensibility mechanisms. Alternatively, you can also use [Azure Virtual Machines](https://azure.microsoft.com/documentation/scenarios/virtual-machines/) (IaaS – Infrastructure as a Service), where you are responsible for maintaining the underlying operating system. +If your web application architecture depends on underlying features of the operating system, use platform supported capabilities such as [startup tasks](cloud-services-startup-tasks.md) or other extensibility mechanisms. Alternatively, you can also use [Azure Virtual Machines](https://azure.microsoft.com/documentation/scenarios/virtual-machines/) (IaaS – Infrastructure as a Service), where you're responsible for maintaining the underlying operating system. ## Next steps Review the latest [Guest OS releases](cloud-services-guestos-update-matrix.md). |
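The retirement-policy FAQ above recommends setting the Guest OS version to **Automatic**. As a minimal sketch of what that looks like in a service configuration file (assuming hypothetical service and role names; osFamily="5" is shown only as an illustration), the osVersion="*" attribute is what opts a deployment into automatic Guest OS updates:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical names; osVersion="*" requests automatic Guest OS updates. -->
<ServiceConfiguration serviceName="MyCloudService" osFamily="5" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```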
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | -Provides you with up-to-date information about the latest Azure Guest OS releases for Cloud Services. This information helps you plan your upgrade path before a Guest OS is disabled. If you configure your roles to use *automatic* Guest OS updates as described in [Azure Guest OS Update Settings][Azure Guest OS Update Settings], it is not vital that you read this page. +Provides you with up-to-date information about the latest Azure Guest OS releases for Cloud Services. This information helps you plan your upgrade path before a Guest OS is disabled. If you configure your roles to use *automatic* Guest OS updates as described in [Azure Guest OS Update Settings][Azure Guest OS Update Settings], it isn't vital that you read this page. > [!IMPORTANT] > This page applies to Cloud Services web and worker roles, which run on top of a Guest OS. It does **not apply** to IaaS Virtual Machines. Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates ###### **June 27, 2024**-The June Guest OS has released. +The June Guest OS released. ###### **June 1, 2024**-The May Guest OS has released. +The May Guest OS released. ###### **April 19, 2024**-The April Guest OS has released. +The April Guest OS released. ###### **April 9, 2024**-The March Guest OS has released. +The March Guest OS released. ###### **February 24, 2024**-The February Guest OS has released. +The February Guest OS released. ###### **January 22, 2024**-The January Guest OS has released. +The January Guest OS released. ###### **January 16, 2023**-The December Guest OS has released. +The December Guest OS released. ###### **December 8, 2023**-The November Guest OS has released. +The November Guest OS released. ###### **October 23, 2023**-The October Guest OS has released. +The October Guest OS released. ###### **September 26, 2023**-The September Guest OS has released. +The September Guest OS released. ###### **August 21, 2023**-The August Guest OS has released. +The August Guest OS released. ###### **July 27, 2023**-The July Guest OS has released. +The July Guest OS released. ###### **July 8, 2023**-The June Guest OS has released. +The June Guest OS released. ###### **May 19, 2023**-The May Guest OS has released. +The May Guest OS released. ###### **April 27, 2023**-The April Guest OS has released. +The April Guest OS released. ###### **March 28, 2023**-The March Guest OS has released. +The March Guest OS released. ###### **March 1, 2023**-The February Guest OS has released. +The February Guest OS released. ###### **January 31, 2023**-The January Guest OS has released. +The January Guest OS released. ###### **January 19, 2023**-The December Guest OS has released. +The December Guest OS released. ###### **December 12, 2022**-The November Guest OS has released. +The November Guest OS released. ###### **November 4, 2022**-The October Guest OS has released. +The October Guest OS released. ###### **September 29, 2022**-The September Guest OS has released. +The September Guest OS released. ###### **September 2, 2022**-The August Guest OS has released. +The August Guest OS released. ###### **August 3, 2022**-The July Guest OS has released. +The July Guest OS released. ###### **July 11, 2022**-The June Guest OS has released. +The June Guest OS released. ###### **May 26, 2022**-The May Guest OS has released. +The May Guest OS released. 
###### **April 30, 2022**-The April Guest OS has released. +The April Guest OS released. ###### **March 19, 2022**-The March Guest OS has released. +The March Guest OS released. ###### **March 2, 2022**-The February Guest OS has released. +The February Guest OS released. ###### **February 11, 2022**-The January Guest OS has released. +The January Guest OS released. ###### **January 10, 2022**-The December Guest OS has released. +The December Guest OS released. ###### **November 19, 2021**-The November Guest OS has released. +The November Guest OS released. ###### **November 1, 2021**-The October Guest OS has released. +The October Guest OS released. ###### **October 8, 2021**-The September Guest OS has released. +The September Guest OS released. ###### **August 27, 2021**-The August Guest OS has released. +The August Guest OS released. ###### **August 13, 2021**-The July Guest OS has released. +The July Guest OS released. ###### **July 1, 2021**-The June Guest OS has released. +The June Guest OS released. ###### **May 26, 2021**-The May Guest OS has released. +The May Guest OS released. ###### **April 30, 2021**-The April Guest OS has released. +The April Guest OS released. ###### **March 28, 2021**-The March Guest OS has released. +The March Guest OS released. ###### **February 19, 2021**-The February Guest OS has released. +The February Guest OS released. ###### **February 5, 2021**-The January Guest OS has released. +The January Guest OS released. ###### **January 15, 2021**-The December Guest OS has released. +The December Guest OS released. ###### **December 19, 2020**-The November Guest OS has released. +The November Guest OS released. ###### **November 17, 2020**-The October Guest OS has released. +The October Guest OS released. ###### **October 10, 2020**-The September Guest OS has released. +The September Guest OS released. ###### **September 5, 2020**-The August Guest OS has released. +The August Guest OS released. ###### **August 17, 2020**-The July Guest OS has released. +The July Guest OS released. ###### **August 10, 2020**-The June Guest OS has released. +The June Guest OS released. ###### **June 2, 2020**-The May Guest OS has released. +The May Guest OS released. ###### **May 4, 2020**-The April Guest OS has released. +The April Guest OS released. ###### **April 2, 2020**-The March Guest OS has released. +The March Guest OS released. ###### **March 5, 2020**-The February Guest OS has released. +The February Guest OS released. ###### **January 24, 2020**-The January Guest OS has released. +The January Guest OS released. ###### **January 8, 2020**-The December Guest OS has released. +The December Guest OS released. ###### **December 5, 2019**-The November Guest OS has released. +The November Guest OS released. ###### **November 1, 2019**-The October Guest OS has released. +The October Guest OS released. ###### **October 7, 2019**-The September Guest OS has released. +The September Guest OS released. ###### **September 4, 2019**-The August Guest OS has released. +The August Guest OS released. ###### **July 26, 2019**-The July Guest OS has released. +The July Guest OS released. ###### **July 8, 2019**-The June Guest OS has released. +The June Guest OS released. ###### **June 6, 2019**-The May Guest OS has released. +The May Guest OS released. ###### **May 7, 2019**-The April Guest OS has released. +The April Guest OS released. ###### **March 26, 2019**-The March Guest OS has released. +The March Guest OS released. ###### **March 12, 2019**-The February Guest OS has released. 
+The February Guest OS released. ###### **February 5, 2019**-The January Guest OS has released. +The January Guest OS released. ###### **January 24, 2019**-Family 6 Guest OS (Windows Server 2019) has released. +Family 6 Guest OS (Windows Server 2019) released. ###### **January 7, 2019**-The December Guest OS has released. +The December Guest OS released. ###### **December 14, 2018**-The November Guest OS has released. +The November Guest OS released. ###### **November 8, 2018**-The October Guest OS has released. +The October Guest OS released. ###### **October 12, 2018**-The September Guest OS has released. +The September Guest OS released. ## Releases The September Guest OS has released. |~~WA-GUEST-OS-4.102_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-4.101_202203-01~~| March 19, 2022 | May 26, 2022 | |~~WA-GUEST-OS-4.100_202202-01~~| March 2, 2022 | April 30, 2022 |-|~~WA-GUEST-OS-4.99_202201-02~~| February 11 , 2022 | March 19, 2022 | -|~~WA-GUEST-OS-4.97_202112-01~~| January 10 , 2022 | March 2, 2022 | +|~~WA-GUEST-OS-4.99_202201-02~~| February 11, 2022 | March 19, 2022 | +|~~WA-GUEST-OS-4.97_202112-01~~| January 10, 2022 | March 2, 2022 | |~~WA-GUEST-OS-4.96_202111-01~~| November 19, 2021 | February 11, 2022 | |~~WA-GUEST-OS-4.95_202110-01~~| November 1, 2021 | January 10, 2022 | |~~WA-GUEST-OS-4.94_202109-01~~| October 8, 2021 | November 19, 2021 | Even though the [retirement policy for the Azure SDK][retire policy sdk] indicat | 1 |Version 1.0+ | ## Guest OS release information-There are three dates that are important to Guest OS releases: **release** date, **disabled** date, and **expiration** date. A Guest OS is considered available when it is in the Portal and can be selected as the target Guest OS. When a Guest OS reaches the **disabled** date, it is removed from Azure. However, any Cloud Service targeting that Guest OS will still operate as normal. +There are three dates that are important to Guest OS releases: **release** date, **disabled** date, and **expiration** date. A Guest OS is considered available when it is in the Portal and can be selected as the target Guest OS. When a Guest OS reaches the **disabled** date, Microsoft removes it from Azure. However, any Cloud Service targeting that Guest OS still operate as normal. -The window between the **disabled** date and the **expiration** date provides you with a buffer to easily transition from one Guest OS to one newer. If you're using *automatic* as your Guest OS, you'll always be on the latest version and you don't have to worry about it expiring. +The window between the **disabled** date and the **expiration** date provides you with a buffer to easily transition from one Guest OS to one newer. If you're using *automatic* as your Guest OS, you're always on the latest version and you don't have to worry about it expiring. -When the **expiration** date passes, any Cloud Service still using that Guest OS will be stopped, deleted, or forced to upgrade. You can read more about the retirement policy [here][retirepolicy]. +When the **expiration** date passes, any Cloud Service still using that Guest OS stops, deletes, or force upgrades. You can read more about the retirement policy [here][retirepolicy]. ## Guest OS family-version explanation The Guest OS families are based on released versions of Microsoft Windows Server. The Guest OS is the underlying operating system that Azure Cloud Services runs on. Each Guest OS has a family, version, and release number. 
The Guest OS families are based on released versions of Microsoft Windows Server Numbers start at 0 and increment by 1 each time a new set of updates is added. Trailing zeros are only shown if important. That is, version 2.10 is a different, much later version than version 2.1. * **Guest OS release** - A rerelease of a Guest OS version. A rerelease occurs if Microsoft finds issues during testing; requiring changes. The latest release always supersedes any previous releases, public or not. The Azure portal will only allow users to pick the latest release for a given version. Deployments running on a previous release are usually not force upgraded depending on the severity of the bug. + A rerelease of a Guest OS version. A rerelease occurs if Microsoft finds issues during testing; requiring changes. The latest release always supersedes any previous releases, public or not. The Azure portal only allows users to pick the latest release for a given version. Deployments running on a previous release aren't force upgraded depending on the severity of the bug. -In the example below, 2 is the family, 12 is the version and "rel2" is the release. +In the following example, 2 is the family, 12 is the version, and "rel2" is the release. **Guest OS release** - 2.12 rel2 **Configuration string for this release** - WA-GUEST-OS-2.12_201208-02 -The configuration string for a Guest OS has this same information embedded in it, along with a date showing which MSRC patches were considered for that release. In this example, MSRC patches produced for Windows Server 2008 R2 up to and including August 2012 were considered for inclusion. Only patches specifically applying to that version of Windows Server are included. For example, if an MSRC patch applies to Microsoft Office, it will not be included because that product is not part of the Windows Server base image. +The configuration string for a Guest OS has this same information embedded in it, along with a date showing which MSRC patches were considered for that release. In this example, MSRC patches produced for Windows Server 2008 R2 up to and including August 2012 were considered for inclusion. Only patches specifically applying to that version of Windows Server are included. For example, if an MSRC patch applies to Microsoft Office, it isn't included because that product isn't part of the Windows Server base image. ## Guest OS system update process-This page includes information on upcoming Guest OS Releases. Customers have indicated that they want to know when a release occurs because their cloud service roles will reboot if they are set to "Automatic" update. Guest OS releases typically occur 2-3 weeks after the MSRC update release that occurs on the second Tuesday of every month. New releases include all the relevant MSRC patches for each Guest OS family. +This page includes information on upcoming Guest OS Releases. Some customers want to know when a release occurs because cloud service roles set to automatically update reboot on releases. Guest OS releases typically occur 2-3 weeks after the MSRC update release that occurs on the second Tuesday of every month. New releases include all the relevant MSRC patches for each Guest OS family. -Microsoft Azure is constantly releasing updates. The Guest OS is only one such update in the pipeline. A release can be affected by many factors too numerous to list here. In addition, Azure runs on literally hundreds of thousands of machines. 
This means that it's impossible to give an exact date and time when your role(s) will reboot. We are working on a plan to limit or time reboots. +Microsoft Azure is constantly releasing updates. The Guest OS is only one such update in the pipeline. Many factors affect a release, and they're too numerous to list here. In addition, Azure runs on literally hundreds of thousands of machines. This means that it's impossible to give an exact date and time to expect your role or roles to reboot. We're working on a plan to limit or time reboots. -When a new release of the Guest OS is published, it can take time to fully propagate across Azure. As services are updated to the new Guest OS, they are rebooted honoring update domains. Services set to use "Automatic" updates will get a release first. After the update, you'll see the new Guest OS version listed for your service in the Azure portal. Rereleases may occur during this period. Some versions may be deployed over longer periods of time and automatic upgrade reboots may not occur for many weeks after the official release date. Once a Guest OS is available, you can then explicitly choose that version from the portal or in your configuration file. +When a new release of the Guest OS is published, it can take time to fully propagate across Azure. As services are updated to the new Guest OS, they reboot, honoring update domains. Services set to use "Automatic" updates get a release first. After the update, you'll see the new Guest OS version listed for your service in the Azure portal. Rereleases may occur during this period. Some versions may be deployed over longer periods of time and automatic upgrade reboots may not occur for many weeks after the official release date. Once a Guest OS is available, you can then explicitly choose that version from the portal or in your configuration file. -For a great deal of valuable information on restarts and pointers to more information technical details of Guest and Host OS updates, see the MSDN blog post titled [Role Instance Restarts Due to OS Upgrades][restarts]. +For a great deal of valuable information on restarts and pointers to more information on Guest and Host OS updates, see the Microsoft Developer Network (MSDN) blog post titled [Role Instance Restarts Due to OS Upgrades][restarts]. -If you manually update your Guest OS, see the [Guest OS retirement policy][retirepolicy] for additional information. +For more information about manually updating your Guest OS, see the [Guest OS retirement policy][retirepolicy]. ## Guest OS supportability and retirement policy The Guest OS supportability and retirement policy is explained [here][retirepolicy]. |
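To make the last point above concrete, here's a minimal sketch of a .cscfg that explicitly pins the article's example release, 2.12 rel2, instead of updating automatically; the service and role names are placeholders, and the configuration string is the one quoted in the article:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Placeholder names; pins the article's example release instead of using "*". -->
<ServiceConfiguration serviceName="MyCloudService" osFamily="2" osVersion="WA-GUEST-OS-2.12_201208-02"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WorkerRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```

A deployment configured this way stays on that exact Guest OS until you change the value, so you take on responsibility for moving off it before the version is disabled.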
cloud-services | Cloud Services How To Configure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-configure-portal.md | description: Learn how to configure cloud services in Azure. Learn to update the Previously updated : 02/21/2023 Last updated : 07/23/2024 After opening the [Azure portal](https://portal.azure.com/), navigate to your cl ![Settings Page](./media/cloud-services-how-to-configure-portal/cloud-service.png) -The **Settings** or **All settings** links will open up **Settings** where you can change the **Properties**, change the **Configuration**, manage the **Certificates**, set up **Alert rules**, and manage the **Users** who have access to this cloud service. +The **Settings** or **All settings** links open up **Settings** where you can change the **Properties**, change the **Configuration**, manage the **Certificates**, set up **Alert rules**, and manage the **Users** who have access to this cloud service. ![Azure cloud service settings](./media/cloud-services-how-to-configure-portal/cs-settings-blade.png) ### Manage Guest OS version -By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you've specified in your service configuration (.cscfg), such as Windows Server 2016. +By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you specified in your service configuration (.cscfg), such as Windows Server 2016. If you need to target a specific OS version, you can set it in **Configuration**. If you need to target a specific OS version, you can set it in **Configuration** ## Monitoring -You can add alerts to your cloud service. Click **Settings** > **Alert Rules** > **Add alert**. +You can add alerts to your cloud service. Select **Settings** > **Alert Rules** > **Add alert**. ![Screenshot of the Settings pan with the Alert rules option highlighted and outlined in red and the Add alert option outlined in red.](./media/cloud-services-how-to-configure-portal/cs-alerts.png) From here, you can set up an alert. With the **Metric** drop-down box, you can s ### Configure monitoring from a metric tile -Instead of using **Settings** > **Alert Rules**, you can click on one of the metric tiles in the **Monitoring** section of the cloud service. +Instead of using **Settings** > **Alert Rules**, you can select on one of the metric tiles in the **Monitoring** section of the cloud service. ![Cloud Service Monitoring](./media/cloud-services-how-to-configure-portal/cs-monitoring.png) You can then initiate a remote desktop connection, remotely reboot the instance, You may need to reconfigure your cloud service through the [service config (cscfg)](cloud-services-model-and-package.md#cscfg) file. First you need to download your .cscfg file, modify it, then upload it. -1. Click on the **Settings** icon or the **All settings** link to open up **Settings**. +1. Select on the **Settings** icon or the **All settings** link to open up **Settings**. ![Settings Page](./media/cloud-services-how-to-configure-portal/cloud-service.png)-2. Click on the **Configuration** item. +2. Select on the **Configuration** item. ![Configuration Blade](./media/cloud-services-how-to-configure-portal/cs-settings-config.png)-3. Click on the **Download** button. +3. Select on the **Download** button. ![Download](./media/cloud-services-how-to-configure-portal/cs-settings-config-panel-download.png) 4. 
After you update the service configuration file, upload and apply the configuration updates: ![Upload](./media/cloud-services-how-to-configure-portal/cs-settings-config-panel-upload.png)-5. Select the .cscfg file and click **OK**. +5. Select the .cscfg file and select **OK**. ## Next steps |
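For context on the download-modify-upload flow above, a typical edit to the downloaded .cscfg is raising a role's instance count before uploading the file back. A minimal sketch, with placeholder names and a purely illustrative setting:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Placeholder names; the instance count was edited from 2 to 3 before re-uploading. -->
<ServiceConfiguration serviceName="MyCloudService" osFamily="5" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="3" />
    <ConfigurationSettings>
      <Setting name="ExampleSetting" value="ExampleValue" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```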
cloud-services | Cloud Services How To Create Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-create-deploy-portal.md | Title: How to create and deploy a cloud service (classic) | Microsoft Docs description: Learn how to use the Quick Create method to create a cloud service and use Upload to upload and deploy a cloud service package in Azure. Previously updated : 02/21/2023 Last updated : 07/23/2024 Three components are required to deploy an application as a cloud service in Azu * **Service Package** The service package (.cspkg) contains the application code and configurations and the service definition file. -You can learn more about these and how to create a package [here](cloud-services-model-and-package.md). +You can learn more about these components and how to create a package [here](cloud-services-model-and-package.md). ## Prepare your app Before you can deploy a cloud service, you must create the cloud service package (.cspkg) from your application code and a cloud service configuration file (.cscfg). The Azure SDK provides tools for preparing these required deployment files. You can install the SDK from the [Azure Downloads](https://azure.microsoft.com/downloads/) page, in the language in which you prefer to develop your application code. Three cloud service features require special configurations before you export a * If you want to deploy a cloud service that uses Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), for data encryption, [configure your application](cloud-services-configure-ssl-certificate-portal.md#modify) for TLS. * If you want to configure Remote Desktop connections to role instances, [configure the roles](cloud-services-role-enable-remote-desktop-new-portal.md) for Remote Desktop.-* If you want to configure verbose monitoring for your cloud service, enable Azure Diagnostics for the cloud service. *Minimal monitoring* (the default monitoring level) uses performance counters gathered from the host operating systems for role instances (virtual machines). *Verbose monitoring* gathers additional metrics based on performance data within the role instances to enable closer analysis of issues that occur during application processing. To find out how to enable Azure Diagnostics, see [Enabling diagnostics in Azure](cloud-services-dotnet-diagnostics.md). +* If you want to configure verbose monitoring for your cloud service, enable Azure Diagnostics for the cloud service. *Minimal monitoring* (the default monitoring level) uses performance counters gathered from the host operating systems for role instances (virtual machines). *Verbose monitoring* gathers more metrics based on performance data within the role instances to enable closer analysis of issues that occur during application processing. To find out how to enable Azure Diagnostics, see [Enabling diagnostics in Azure](cloud-services-dotnet-diagnostics.md). To create a cloud service with deployments of web roles or worker roles, you must [create the service package](cloud-services-model-and-package.md#servicepackagecspkg). ## Before you begin-* If you haven't installed the Azure SDK, click **Install Azure SDK** to open the [Azure Downloads page](https://azure.microsoft.com/downloads/), and then download the SDK for the language in which you prefer to develop your code. (You'll have an opportunity to do this later.) 
+* If you need to install the Azure SDK, choose **Install Azure SDK** to open the [Azure Downloads page](https://azure.microsoft.com/downloads/), and then download the SDK for the language in which you prefer to develop your code. You have an opportunity to do the installation later. * If any role instances require a certificate, create the certificates. Cloud services require a .pfx file with a private key. You can upload the certificates to Azure as you create and deploy the cloud service. ## Create and deploy-1. Log in to the [Azure portal](https://portal.azure.com/). -2. Click **Create a resource > Compute**, and then scroll down to and click **Cloud Service**. +1. Sign in to the [Azure portal](https://portal.azure.com/). +2. Choose **Create a resource > Compute**, and then scroll down to and select **Cloud Service**. ![Publish your cloud service1](media/cloud-services-how-to-create-deploy-portal/create-cloud-service.png) 3. In the new **Cloud Service** pane, enter a value for the **DNS name**. 4. Create a new **Resource Group** or select an existing one. 5. Select a **Location**.-6. Click **Package**. This opens the **Upload a package** pane. Fill in the required fields. If any of your roles contain a single instance, ensure **Deploy even if one or more roles contain a single instance** is selected. +6. Select **Package**, which opens the **Upload a package** pane. Fill in the required fields. If any of your roles contain a single instance, ensure **Deploy even if one or more roles contain a single instance** is selected. 7. Make sure that **Start deployment** is selected.-8. Click **OK** which will close the **Upload a package** pane. -9. If you do not have any certificates to add, click **Create**. +8. Select **OK**, which closes the **Upload a package** pane. +9. If you don't have any certificates to add, choose **Create**. ![Publish your cloud service2](media/cloud-services-how-to-create-deploy-portal/select-package.png) To create a cloud service with deployments of web roles or worker roles, you mus If your deployment package was [configured to use certificates](cloud-services-configure-ssl-certificate-portal.md#modify), you can upload the certificate now. 1. Select **Certificates**, and on the **Add certificates** pane, select the TLS/SSL certificate .pfx file, and then provide the **Password** for the certificate,-2. Click **Attach certificate**, and then click **OK** on the **Add certificates** pane. -3. Click **Create** on the **Cloud Service** pane. When the deployment has reached the **Ready** status, you can proceed to the next steps. +2. Select **Attach certificate**, and then choose **OK** on the **Add certificates** pane. +3. Select **Create** on the **Cloud Service** pane. When the deployment reaches the **Ready** status, proceed to the next steps. ![Publish your cloud service3](media/cloud-services-how-to-create-deploy-portal/attach-cert.png) ## Verify your deployment completed successfully-1. Click the cloud service instance. +1. Select the cloud service instance. The status should show that the service is **Running**.-2. Under **Essentials**, click the **Site URL** to open your cloud service in a web browser. +2. Under **Essentials**, select the **Site URL** to open your cloud service in a web browser. ![CloudServices_QuickGlance](./media/cloud-services-how-to-create-deploy-portal/running.png) |
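The deploy article above lists the service definition file among the three required components of a cloud service. As a minimal sketch of what the package carries (all names hypothetical), a single-web-role .csdef looks roughly like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical names; one web role listening on HTTP port 80. -->
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```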
cloud-services | Cloud Services How To Manage Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-manage-portal.md | Title: Common cloud service management tasks | Microsoft Docs description: Learn how to manage Cloud Services in the Azure portal. These examples use the Azure portal. Previously updated : 02/21/2023 Last updated : 07/23/2024 In the **Cloud Services** area of the Azure portal, you can: * Link resources to your cloud service so that you can see the resource dependencies and scale the resources together. * Delete a cloud service or a deployment. -For more information about how to scale your cloud service, see [Configure auto-scaling for a cloud service in the portal](cloud-services-how-to-scale-portal.md). +For more information about how to scale your cloud service, see [Configure autoscaling for a cloud service in the portal](cloud-services-how-to-scale-portal.md). ## Update a cloud service role or deployment If you need to update the application code for your cloud service, use **Update** on the cloud service blade. You can update a single role or all roles. To update, you can upload a new service package or service configuration file. If you need to update the application code for your cloud service, use **Update* Azure can guarantee only 99.95 percent service availability during a cloud service update if each role has at least two role instances (virtual machines). With two role instances, one virtual machine processes client requests while the other is updated. -6. Select the **Start deployment** check box to apply the update after the upload of the package has finished. +6. Select the **Start deployment** check box to apply the update after the upload of the package finishes. 7. Select **OK** to begin updating the service. There are two key prerequisites for a successful deployment swap: - All instances of your roles must be running before you can perform the swap. You can check the status of your instances on the **Overview** blade of the Azure portal. Alternatively, you can use the [Get-AzureRole](/powershell/module/servicemanagement/azure/get-azurerole) command in Windows PowerShell. -Note that guest OS updates and service healing operations also can cause deployment swaps to fail. For more information, see [Troubleshoot cloud service deployment problems](cloud-services-troubleshoot-deployment-problems.md). +> [!NOTE] +> Guest OS updates and service healing operations also can cause deployment swaps to fail. For more information, see [Troubleshoot cloud service deployment problems](cloud-services-troubleshoot-deployment-problems.md). **Does a swap incur downtime for my application? How should I handle it?** -As described in the previous section, a deployment swap is typically fast because it's just a configuration change in the Azure load balancer. In some cases, it can take 10 or more seconds and result in transient connection failures. To limit impact to your customers, consider implementing [client retry logic](/azure/architecture/best-practices/transient-faults). +As described in the previous section, a deployment swap is typically fast because it's just a configuration change in the Azure load balancer. In some cases, it can take 10 or more seconds and result in transient connection failures. To limit the impact to your customers, consider implementing [client retry logic](/azure/architecture/best-practices/transient-faults). 
## Delete deployments and a cloud service Before you can delete a cloud service, you must delete each existing deployment. -To save compute costs, you can delete the staging deployment after you verify that your production deployment is working as expected. You are billed for compute costs for deployed role instances that are stopped. +To save compute costs, you can delete the staging deployment after you verify that your production deployment is working as expected. Even if you stop your deployed role instances, Azure bills you for compute costs. Use the following procedure to delete a deployment or your cloud service. |
cloud-services | Cloud Services How To Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-monitor.md | Title: Monitor an Azure Cloud Service (classic) | Microsoft Docs description: Describes what monitoring an Azure Cloud Service involves and what some of your options are. Previously updated : 02/21/2023 Last updated : 07/23/2024 -You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the `Microsoft.Azure.Diagnostics` extension applied to a role, that role can collect additional points of data. This article provides an introduction to Azure Diagnostics for Cloud Services. +You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the `Microsoft.Azure.Diagnostics` extension applied to a role, that role can collect more points of data. This article provides an introduction to Azure Diagnostics for Cloud Services. -With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data is not stored in your storage account and has no additional cost associated with it. --With advanced monitoring, additional metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured by role; you can use different storage accounts for different roles. This is configured with a connection string in the [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files. +With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data isn't stored in your storage account and has no additional cost associated with it. +With advanced monitoring, more metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured per role; you can use different storage accounts for different roles. You use a connection string in the [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files for configuration. ## Basic monitoring As stated in the introduction, a cloud service automatically collects basic monitoring data from the host virtual machine. This data includes CPU percentage, network in/out, and disk read/write. The collected monitoring data is automatically displayed on the overview and metrics pages of the cloud service, in the Azure portal. -Basic monitoring does not require a storage account. +Basic monitoring doesn't require a storage account. ![basic cloud service monitoring tiles](media/cloud-services-how-to-monitor/basic-tiles.png) ## Advanced monitoring -Advanced monitoring involves using the **Azure Diagnostics** extension (and optionally the Application Insights SDK) on the role you want to monitor. The diagnostics extension uses a config file (per role) named **diagnostics.wadcfgx** to configure the diagnostics metrics monitored. 
The Azure Diagnostic extension collects and stores data in an Azure Storage account. These settings are configured in the **.wadcfgx**, [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef), and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files. This means that there is an extra cost associated with advanced monitoring.
+Advanced monitoring involves using the **Azure Diagnostics** extension (and optionally the Application Insights SDK) on the role you want to monitor. The diagnostics extension uses a config file (per role) named **diagnostics.wadcfgx** to configure the diagnostics metrics monitored. The Azure Diagnostics extension collects and stores data in an Azure Storage account. These settings are configured in the **.wadcfgx**, [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef), and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files. This means that there's an extra cost associated with advanced monitoring.

As each role is created, Visual Studio adds the Azure Diagnostics extension to it. This diagnostics extension can collect the following types of information:

As each role is created, Visual Studio adds the Azure Diagnostics extension to i
* Application logs
* Windows event logs
* .NET event source-* IIS logs
-* Manifest based ETW
-* Crash dumps
+* Internet Information Services (IIS) logs
+* Manifest-based Event Tracing for Windows (ETW)
* Custom error logs

> [!IMPORTANT]
There are two config files you must change for advanced diagnostics to be enable
### ServiceDefinition.csdef

-In the **ServiceDefinition.csdef** file, add a new setting named `Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString` for each role that uses advanced diagnostics. Visual Studio adds this value to the file when you create a new project. In case it is missing, you can add it now.
+In the **ServiceDefinition.csdef** file, add a new setting named `Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString` for each role that uses advanced diagnostics. Visual Studio adds this value to the file when you create a new project. If it's missing, you can add it now.

```xml
<ServiceDefinition name="AnsurCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">
In the **ServiceDefinition.csdef** file, add a new setting named `Microsoft.Wind
   <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />
```

-This defines a new setting that must be added to every **ServiceConfiguration.cscfg** file.
+This snippet defines a new setting that must be added to every **ServiceConfiguration.cscfg** file.

Most likely you have two **.cscfg** files, one named **ServiceConfiguration.cloud.cscfg** for deploying to Azure, and one named **ServiceConfiguration.local.cscfg** that is used for local deployments in the emulated environment. Open and change each **.cscfg** file. Add a setting named `Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString`. Set the value to the **Primary connection string** of the classic storage account. If you want to use the local storage on your development machine, use `UseDevelopmentStorage=true`.

Most likely you have two **.cscfg** files, one named **ServiceConfiguration.clou
## Use Application Insights

-When you publish the Cloud Service from Visual Studio, you are given the option to send the diagnostic data to Application Insights. 
You can create the Application Insights Azure resource at that time or send the data to an existing Azure resource. Your cloud service can be monitored by Application Insights for availability, performance, failures, and usage. Custom charts can be added to Application Insights so that you can see the data that matters the most. Role instance data can be collected by using the Application Insights SDK in your cloud service project. For more information on how to integrate Application Insights, see [Application Insights with Cloud Services](../azure-monitor/app/azure-web-apps-net-core.md). --Note that while you can use Application Insights to display the performance counters (and the other settings) you have specified through the Windows Azure Diagnostics extension, you only get a richer experience by integrating the Application Insights SDK into your worker and web roles. +When you publish the Cloud Service from Visual Studio, you have the option to send the diagnostic data to Application Insights. You can create the Application Insights Azure resource at that time or send the data to an existing Azure resource. Application Insights can monitor your cloud service for availability, performance, failures, and usage. Custom charts can be added to Application Insights so that you can see the data that matters the most. Role instance data can be collected by using the Application Insights SDK in your cloud service project. For more information on how to integrate Application Insights, see [Application Insights with Cloud Services](../azure-monitor/app/azure-web-apps-net-core.md). +While you can use Application Insights to display the performance counters (and the other settings) you specified through the Microsoft Azure Diagnostics extension, you only get a richer experience by integrating the Application Insights SDK into your worker and web roles. ## Next steps |
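If you prefer to script the advanced monitoring setup instead of configuring it in Visual Studio, the classic cmdlets can apply a diagnostics configuration to a role. A sketch, assuming hypothetical service, role, and storage names and an existing diagnostics public configuration XML file:

```powershell
# Hypothetical names; $storageKey holds the storage account key, and the XML path
# points at a diagnostics public config file you have already created.
$storageContext = New-AzureStorageContext -StorageAccountName "mydiagstorage" -StorageAccountKey $storageKey
Set-AzureServiceDiagnosticsExtension -ServiceName "contoso-svc" -Slot Production -Role "WorkerRole1" `
    -StorageContext $storageContext -DiagnosticsConfigurationPath "C:\configs\WorkerRole1.PubConfig.xml"
```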
cloud-services | Cloud Services How To Scale Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-portal.md | description: Learn how to use the portal to configure auto scale rules for a clo Previously updated : 02/21/2023
Last updated : 07/23/2024

-Conditions can be set for a cloud service worker role that trigger a scale in or out operation. The conditions for the role can be based on the CPU, disk, or network load of the role. You can also set a condition based on a message queue or the metric of some other Azure resource associated with your subscription.
+You can set conditions for a cloud service worker role to trigger scale in or out operations. The conditions for the role can be based on the CPU, disk, or network load of the role. You can also set a condition based on a message queue or the metric of some other Azure resource associated with your subscription.

> [!NOTE]
> This article focuses on Cloud Service (classic). When you create a virtual machine (classic) directly, it is hosted in a cloud service. You can scale a standard virtual machine by associating it with an [availability set](/previous-versions/azure/virtual-machines/windows/classic/configure-availability-classic) and manually turning it on or off.

Conditions can be set for a cloud service worker role that trigger a scale in or
## Considerations
You should consider the following information before you configure scaling for your application:
-* Scaling is affected by core usage.
+* Core usage affects scaling.

 - Larger role instances use more cores. You can scale an application only within the limit of cores for your subscription. For example, say your subscription has a limit of 20 cores. If you run an application with two medium-sized cloud services (a total of 4 cores), you can only scale up other cloud service deployments in your subscription by the remaining 16 cores. For more information about sizes, see [Cloud Service Sizes](cloud-services-sizes-specs.md).
 + Larger role instances use more cores. You can scale an application only within the limit of cores for your subscription. For example, say your subscription has a limit of 20 cores. If you run an application with two medium-sized cloud services (a total of four cores), you can only scale up other cloud service deployments in your subscription by the remaining 16 cores. For more information about sizes, see [Cloud Service Sizes](cloud-services-sizes-specs.md).
* You can scale based on a queue message threshold. For more information about how to use queues, see [How to use the Queue Storage Service](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli).
* You can also scale other resources associated with your subscription.
-* To enable high availability of your application, you should ensure that it is deployed with two or more role instances. For more information, see [Service Level Agreements](https://azure.microsoft.com/support/legal/sla/).
+* To enable high availability of your application, you should ensure it deploys with two or more role instances. For more information, see [Service Level Agreements](https://azure.microsoft.com/support/legal/sla/).
* Auto Scale only happens when all the roles are in **Ready** state.

You should consider the following information before you configure scaling for y
After you select your cloud service, you should have the cloud service blade visible.

1. 
On the cloud service blade, on the **Roles and Instances** tile, select the name of the cloud service.

 - **IMPORTANT**: Make sure to click the cloud service role, not the role instance that is below the role.
 + **IMPORTANT**: Make sure to select the cloud service role, not the role instance that is below the role.

 ![Screenshot of the Roles and instances tile with the Worker Role With S B Queue 1 option outlined in red.](./media/cloud-services-how-to-scale-portal/roles-instances.png)
2. Select the **scale** tile.

Set the **Scale by** option to **schedule and performance rules**. Select **Add Profile**. The profile determines which mode you want to use for the scale: **always**, **recurrence**, **fixed date**.

-After you have configured the profile and rules, select the **Save** icon at the top.
+After you configure the profile and rules, select the **Save** icon at the top.

#### Profile
The profile sets minimum and maximum instances for the scale, and also when this scale range is active.

The profile sets minimum and maximum instances for the scale, and also when this
![Cloud service scale with a fixed date](./media/cloud-services-how-to-scale-portal/select-fixed.png)

-After you have configured the profile, select the **OK** button at the bottom of the profile blade.
+After you configure the profile, select the **OK** button at the bottom of the profile blade.

#### Rule
Rules are added to a profile and represent a condition that triggers the scale.

The rule trigger is based on a metric of the cloud service (CPU usage, disk acti
![Screenshot of the Rule dialog box with the Metric name option outlined in red.](./media/cloud-services-how-to-scale-portal/rule-settings.png)

-After you have configured the rule, select the **OK** button at the bottom of the rule blade.
+After you configure the rule, select the **OK** button at the bottom of the rule blade.

## Back to manual scale
Navigate to the [scale settings](#where-scale-is-located) and set the **Scale by** option to **an instance count that I enter manually**.

This setting removes automated scaling from the role and then you can set the in
2. A role instance slider to set the instances to scale to.
3. Instances of the role to scale to.

-After you have configured the scale settings, select the **Save** icon at the top.
+After you configure the scale settings, select the **Save** icon at the top. |
cloud-services | Cloud Services How To Scale Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-powershell.md | Title: Scale an Azure cloud service (classic) in Windows PowerShell | Microsoft Docs -description: (classic) Learn how to use PowerShell to scale a web role or worker role in or out in Azure. +description: Learn how to use PowerShell to scale a web role or worker role in or out in Azure cloud services (classic). Previously updated : 02/21/2023 Last updated : 07/23/2024 -## Log in to Azure +## Sign in to Azure -Before you can perform any operations on your subscription through PowerShell, you must log in: +Before you can perform any operations on your subscription through PowerShell, you must sign in: ```powershell Add-AzureAccount To scale out your role, pass the desired number of instances as the **Count** pa Set-AzureRole -ServiceName '<your_service_name>' -RoleName '<your_role_name>' -Slot <target_slot> -Count <desired_instances> ``` -The cmdlet blocks momentarily while the new instances are provisioned and started. During this time, if you open a new PowerShell window and call **Get-AzureRole** as shown earlier, you will see the new target instance count. And if you inspect the role status in the portal, you should see the new instance starting up: +The cmdlet blocks momentarily while the new instances are provisioned and started. During this time, if you open a new PowerShell window and call **Get-AzureRole** as shown earlier, you see the new target instance count. If you inspect the role status in the portal, you should see the new instance starting up: ![VM instance starting in portal](./media/cloud-services-how-to-scale-powershell/role-instance-starting.png) -Once the new instances have started, the cmdlet will return successfully: +Once the new instances start, the cmdlet returns successfully: ![Role instance increase success](./media/cloud-services-how-to-scale-powershell/set-azure-role-success.png) You can scale in a role by removing instances in the same way. Set the **Count** ## Next steps -It is not possible to configure auto-scale for cloud services from PowerShell. To do that, see [How to auto scale a cloud service](cloud-services-how-to-scale-portal.md). +It isn't possible to configure autoscale for cloud services from PowerShell. To do that, see [How to auto scale a cloud service](cloud-services-how-to-scale-portal.md). |
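Putting the cmdlets from that article together, a typical scale-out session looks like the following sketch (the service and role names are hypothetical, and it assumes you already signed in with **Add-AzureAccount**):

```powershell
# Check the current instance count, then scale the worker role out to four instances.
Get-AzureRole -ServiceName "contoso-svc" -Slot Production -RoleName "WorkerRole1" -InstanceDetails
Set-AzureRole -ServiceName "contoso-svc" -RoleName "WorkerRole1" -Slot Production -Count 4
```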
cloud-services | Cloud Services Model And Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-model-and-package.md | description: Describes the cloud service model (.csdef, .cscfg) and package (.cs Previously updated : 02/21/2023 Last updated : 07/23/2024 -A cloud service is created from three components, the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and how it's configured; collectively called the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**. +A cloud service is created from three components, the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and its configuration; collectively called the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**. -Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you cannot alter the definition. +Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you can't alter the definition. ## What would you like to know more about? * I want to know more about the [ServiceDefinition.csdef](#csdef) and [ServiceConfig.cscfg](#cscfg) files. * I already know about that, give me [some examples](#next-steps) on what I can configure. * I want to create the [ServicePackage.cspkg](#cspkg).-* I am using Visual Studio and I want to... +* I'm using Visual Studio and I want to... * [Create a cloud service][vs_create] * [Reconfigure an existing cloud service][vs_reconfigure] * [Deploy a Cloud Service project][vs_deploy] The **ServiceDefinition.csdef** file specifies the settings that are used by Azu </ServiceDefinition> ``` -You can refer to the [Service Definition Schema](/previous-versions/azure/reference/ee758711(v=azure.100)) for a better understanding of the XML schema used here, however, here is a quick explanation of some of the elements: +You can refer to the [Service Definition Schema](/previous-versions/azure/reference/ee758711(v=azure.100)) for a better understanding of the XML schema used here, however, here's a quick explanation of some of the elements: **Sites** Contains the definitions for websites or web applications that are hosted in IIS7. Contains tasks that are run when the role starts. The tasks are defined in a .cm ## ServiceConfiguration.cscfg The configuration of the settings for your cloud service is determined by the values in the **ServiceConfiguration.cscfg** file. You specify the number of instances that you want to deploy for each role in this file. The values for the configuration settings that you defined in the service definition file are added to the service configuration file. 
The thumbprints for any management certificates that are associated with the cloud service are also added to the file. The [Azure Service Configuration Schema (.cscfg File)](/previous-versions/azure/reference/ee758710(v=azure.100)) provides the allowable format for a service configuration file.

-The service configuration file is not packaged with the application, but is uploaded to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles:
+The service configuration file isn't packaged with the application. The configuration uploads to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles:

```xml
<?xml version="1.0"?>
The service configuration file is not packaged with the application, but is uplo
</ServiceConfiguration>
```

-You can refer to the [Service Configuration Schema](/previous-versions/azure/reference/ee758710(v=azure.100)) for better understanding the XML schema used here, however, here is a quick explanation of the elements:
+You can refer to the [Service Configuration Schema](/previous-versions/azure/reference/ee758710(v=azure.100)) for a better understanding of the XML schema used here; however, here's a quick explanation of the elements:

**Instances**
-Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, it is recommended that you deploy more than one instance of your web-facing roles. By deploying more than one instance, you are adhering to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service.
+Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, we recommend you deploy more than one instance of your web-facing roles. By deploying more than one instance, you adhere to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service.

**ConfigurationSettings**
Configures the settings for the running instances for a role. The name of the `<Setting>` elements must match the setting definitions in the service definition file.

Configures the certificates that are used by the service. The previous code exam
## Defining ports for role instances
Azure allows only one entry point to a web role, meaning that all traffic occurs through one IP address. You can configure your websites to share a port by configuring the host header to direct the request to the correct location. You can also configure your applications to listen to well-known ports on the IP address. 
-The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80, and the web applications are configured to receive requests from an alternate host header that is called "mail.mysite.cloudapp.net".
+The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80. The web applications are configured to receive requests from an alternate host header that is called "mail.mysite.cloudapp.net".

```xml
<WebRole>
The following sample shows the configuration for a web role with a website and w
## Changing the configuration of a role
-You can update the configuration of your cloud service while it is running in Azure, without taking the service offline. To change configuration information, you can either upload a new configuration file, or edit the configuration file in place and apply it to your running service. The following changes can be made to the configuration of a service:
+You can update the configuration of your cloud service while it runs in Azure, without taking the service offline. To change configuration information, you can either upload a new configuration file, or edit the configuration file in place and apply it to your running service. The following changes can be made to the configuration of a service:

* **Changing the values of configuration settings**
    When a configuration setting changes, a role instance can choose to apply the change while the instance is online, or to recycle the instance gracefully and apply the change while the instance is offline.
* **Changing the service topology of role instances**
 -  Topology changes do not affect running instances, except where an instance is being removed. All remaining instances generally do not need to be recycled; however, you can choose to recycle role instances in response to a topology change.
 +  Topology changes don't affect running instances, except where an instance is being removed. All remaining instances generally don't need to be recycled; however, you can choose to recycle role instances in response to a topology change.
* **Changing the certificate thumbprint**
 -  You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate and bring it back online after the change is complete.
 +  You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate. Azure brings it back online after the change is complete.

### Handling configuration changes with Service Runtime Events
The [Azure Runtime Library](/previous-versions/azure/reference/mt419365(v=azure.100)) includes the [Microsoft.WindowsAzure.ServiceRuntime](/previous-versions/azure/reference/ee741722(v=azure.100)) namespace, which provides classes for interacting with the Azure environment from a role. The [RoleEnvironment](/previous-versions/azure/reference/ee773173(v=azure.100)) class defines the following events that are raised before and after a configuration change:

Where the variables are defined as follows:

| | |
| \[DirectoryName\] |The subdirectory under the root project directory that contains the .csdef file of the Azure project. 
| | \[ServiceDefinition\] |The name of the service definition file. By default, this file is named ServiceDefinition.csdef. |-| \[OutputFileName\] |The name for the generated package file. Typically, this is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. | +| \[OutputFileName\] |The name for the generated package file. Typically, this variable is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. | | \[RoleName\] |The name of the role as defined in the service definition file. | | \[RoleBinariesDirectory] |The location of the binary files for the role. | | \[VirtualPath\] |The physical directories for each virtual path defined in the Sites section of the service definition. | I'm creating a cloud service package and I want to... * [Setup remote desktop for a cloud service instance][remotedesktop] * [Deploy a Cloud Service project][deploy] -I am using Visual Studio and I want to... +I'm using Visual Studio and I want to... * [Create a new cloud service][vs_create] * [Reconfigure an existing cloud service][vs_reconfigure] |
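As an illustration of the cspack variables in the table above, a sketch with hypothetical directory and role names:

```powershell
# [DirectoryName]\[ServiceDefinition], /role:[RoleName];[RoleBinariesDirectory], /out:[OutputFileName]
cspack ContosoCloudService\ServiceDefinition.csdef `
    /role:ContosoWorker;ContosoCloudService\ContosoWorker\bin\Release `
    /out:ContosoApp.cspkg
```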
cloud-services | Cloud Services Nodejs Chat App Socketio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md | Title: Node.js application using Socket.io - Azure description: Socket.IO is now natively supported on Azure. This old tutorial shows how to self-host a socket.IO-based chat application on Azure. The latest recommendation is to let Socket.IO provide real time communication for a Node.js server and clients, and let Azure manage scaling client connections. Previously updated : 08/31/2023 Last updated : 07/23/2024 server and clients. This tutorial walks you through hosting a socket.IO based chat application on Azure. For more information on Socket.IO, see [socket.io](https://socket.io). -A screenshot of the completed application is below: +The following screenshot shows the completed application: ![A browser window displaying the service hosted on Azure][completed-app] Ensure that the following products and versions are installed to successfully co * Install [Python version 2.7.10](https://www.python.org/) ## Create a Cloud Service Project-The following steps create the cloud service project that will host the Socket.IO application. +The following steps create the cloud service project that hosts the Socket.IO application. 1. From the **Start Menu** or **Start Screen**, search for **Windows PowerShell**. Finally, right-click **Windows PowerShell** and select **Run As Administrator**. The following steps create the cloud service project that will host the Socket.I PS C:\Node> Add-AzureNodeWorkerRole ``` - You will see the following response: + You see the following response: ![The output of the new-azureservice and add-azurenodeworkerrolecmdlets](./media/cloud-services-nodejs-chat-app-socketio/socketio-1.png) ## Download the Chat Example -For this project, we will use the chat example from the [Socket.IO +For this project, we use the chat example from the [Socket.IO GitHub repository]. Perform the following steps to download the example and add it to the project you previously created. and add it to the project you previously created. ![Explorer, displaying the contents of the examples\\chat directory extracted from the archive][chat-contents] - The highlighted items in the screenshot above are the files copied from the **examples\\chat** directory + The highlighted items in the previous screenshot are the files copied from the **examples\\chat** directory -3. In the **C:\\node\\chatapp\\WorkerRole1** directory, delete the **server.js** file, and then rename the **app.js** file to **server.js**. This removes the default **server.js** file created previously by the **Add-AzureNodeWorkerRole** cmdlet and replaces it with the application file from the chat example. +3. In the **C:\\node\\chatapp\\WorkerRole1** directory, delete the **server.js** file, and then rename the **app.js** file to **server.js**. This step removes the default **server.js** file created previously by the **Add-AzureNodeWorkerRole** cmdlet and replaces it with the application file from the chat example. ### Modify Server.js and Install Modules Before testing the application in the Azure emulator, we must server.js file: 1. Open the **server.js** file in Visual Studio or any text editor. -2. Find the **Module dependencies** section at the beginning of server.js and change the line containing **sio = require('..//..//lib//socket.io')** to **sio = require('socket.io')** as shown below: +2. 
Find the **Module dependencies** section at the beginning of server.js and change the line containing **sio = require('..//..//lib//socket.io')** to **sio = require('socket.io')** as follows: ```js var express = require('express') server.js file: 3. To ensure the application listens on the correct port, open server.js in Notepad or your favorite editor, and then change the- following line by replacing **3000** with **process.env.port** as shown below: + following line by replacing **3000** with **process.env.port** as follows: ```js //app.listen(3000, function () {           //Original After saving the changes to **server.js**, use the following steps to install required modules, and then test the application in the Azure emulator: -1. Using **Azure PowerShell**, change directories to the **C:\\node\\chatapp\\WorkerRole1** directory and use the following command to install the modules required by this application: +1. In **Azure PowerShell**, change directories to the **C:\\node\\chatapp\\WorkerRole1** directory and use the following command to install the modules required by this application: ```powershell PS C:\node\chatapp\WorkerRole1> npm install ``` - This will install the modules listed in the package.json file. After + This command installs the modules listed in the package.json file. After the command completes, you should see output similar to the- following: + following screenshot: ![The output of the npm install command][The-output-of-the-npm-install-command] 2. Since this example was originally a part of the Socket.IO GitHub repository, and directly referenced the Socket.IO library by- relative path, Socket.IO was not referenced in the package.json + relative path, Socket.IO wasn't referenced in the package.json file, so we must install it by issuing the following command: ```powershell Azure emulator: 2. Open a browser and navigate to `http://127.0.0.1`. 3. When the browser window opens, enter a nickname and then hit enter.- This will allow you to post messages as a specific nickname. To test - multi-user functionality, open additional browser windows using the + This step allows you to post messages as a specific nickname. To test + multi-user functionality, open more browser windows using the same URL and enter different nicknames. ![Two browser windows displaying chat messages from User1 and User2](./media/cloud-services-nodejs-chat-app-socketio/socketio-8.png) messages between different clients using Socket.IO. ## Next steps -In this tutorial you learned how to create a basic chat application hosted in an Azure Cloud Service. To learn how to host this application in an Azure Website, see [Build a Node.js Chat Application with Socket.IO on an Azure Web Site][chatwebsite]. +In this tutorial, you learned how to create a basic chat application hosted in an Azure Cloud Service. To learn how to host this application in an Azure Website, see [Build a Node.js Chat Application with Socket.IO on an Azure Web Site][chatwebsite]. For more information, see also the [Node.js Developer Center](/azure/developer/javascript/). |
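Once the modules are installed, running the chat example locally typically comes down to a single cmdlet. A sketch, assuming the Azure SDK emulators are installed and you run it from the service root:

```powershell
# Starts the compute emulator and opens the application in the default browser.
PS C:\node\chatapp> Start-AzureEmulator -Launch
```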
cloud-services | Cloud Services Nodejs Develop Deploy App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md | Title: Node.js Getting Started Guide -description: Learn how to create a simple Node.js web application and deploy it to an Azure cloud service. +description: Learn how to create a Node.js web application and deploy it to an Azure cloud service. Previously updated : 02/21/2023 Last updated : 07/23/2024 -This tutorial shows how to create a simple Node.js application running in an Azure Cloud Service. Cloud Services are the building blocks of scalable cloud applications in Azure. They allow the separation and independent management and scale-out of front-end and back-end components of your application. Cloud Services provide a robust dedicated virtual machine for hosting each role reliably. --For more information on Cloud Services, and how they compare to Azure Websites and Virtual machines, see [Azure Websites, Cloud Services and Virtual Machines comparison]. +This tutorial shows how to create a Node.js application running in an Azure Cloud Service. Cloud Services are the building blocks of scalable cloud applications in Azure. They allow the separation and independent management and scale-out of front-end and back-end components of your application. Cloud Services provide a robust dedicated virtual machine for hosting each role reliably. > [!TIP]-> Looking to build a simple website? If your scenario involves just a simple website front-end, consider [using a lightweight web app]. You can easily upgrade to a Cloud Service as your web app grows and your requirements change. +> Looking to build a website? If your scenario involves just a simple website front-end, consider [using a lightweight web app]. You can easily upgrade to a Cloud Service as your web app grows and your requirements change. -By following this tutorial, you will build a simple web application hosted inside a web role. You will use the compute emulator to test your application locally, then deploy it using PowerShell command-line tools. +By following this tutorial, you build a web application hosted inside a web role. You use the compute emulator to test your application locally, then deploy it using PowerShell command-line tools. -The application is a simple "hello world" application: +The application is a "hello world" application: ![A web browser displaying the Hello World web page][A web browser displaying the Hello World web page] Perform the following tasks to create a new Azure Cloud Service project, along w 1. Run **Windows PowerShell** as Administrator; from the **Start Menu** or **Start Screen**, search for **Windows PowerShell**. 2. [Connect PowerShell] to your subscription.-3. Enter the following PowerShell cmdlet to create to create the project: +3. Enter the following PowerShell cmdlet to create the project: ```powershell New-AzureServiceProject helloworld Perform the following tasks to create a new Azure Cloud Service project, along w > [!NOTE] > If you do not specify a role name, a default name is used. You can provide a name as the first cmdlet parameter: `Add-AzureNodeWebRole MyRole` -The Node.js app is defined in the file **server.js**, located in the directory for the web role (**WebRole1** by default). Here is the code: +The Node.js app is defined in the file **server.js**, located in the directory for the web role (**WebRole1** by default). 
Here's the code: ```js var http = require('http'); To deploy your application to Azure, you must first download the publishing sett Get-AzurePublishSettingsFile ``` - This will use your browser to navigate to the publish settings download page. You may be prompted to log in with a Microsoft Account. If so, use the account associated with your Azure subscription. + This command uses your browser to navigate to the publish settings download page. You may be prompted to sign in with a Microsoft Account. If so, use the account associated with your Azure subscription. Save the downloaded profile to a file location you can easily access. 2. Run following cmdlet to import the publishing profile you downloaded: $ServiceName = "NodeHelloWorld" + $(Get-Date -Format ('ddhhmm')) Publish-AzureServiceProject -ServiceName $ServiceName -Location "East US" -Launch ``` -* **-ServiceName** specifies the name for the deployment. This must be a unique name, otherwise the publish process will fail. The **Get-Date** command tacks on a date/time string that should make the name unique. -* **-Location** specifies the datacenter that the application will be hosted in. To see a list of available datacenters, use the **Get-AzureLocation** cmdlet. -* **-Launch** opens a browser window and navigates to the hosted service after deployment has completed. +* **-ServiceName** specifies the name for the deployment. This value must be a unique name; otherwise, the publish process fails. The **Get-Date** command tacks on a date/time string that should make the name unique. +* **-Location** specifies the datacenter that hosts the application. To see a list of available datacenters, use the **Get-AzureLocation** cmdlet. +* **-Launch** opens a browser window and navigates to the hosted service after the deployment completes. -After publishing succeeds, you will see a response similar to the following: +After publishing succeeds, you see a response similar to the screenshot: ![The output of the Publish-AzureService command][The output of the Publish-AzureService command] > [!NOTE] > It can take several minutes for the application to deploy and become available when first published. -Once the deployment has completed, a browser window will open and navigate to the cloud service. +Once the deployment completes, a browser window opens and navigates to the cloud service. ![A browser window displaying the hello world page; the URL indicates the page is hosted on Azure.][A browser window displaying the hello world page; the URL indicates the page is hosted on Azure.] Your application is now running on Azure. The **Publish-AzureServiceProject** cmdlet performs the following steps: 1. Creates a package to deploy. The package contains all the files in your application folder.-2. Creates a new **storage account** if one does not exist. The Azure storage account is used to store the application package during deployment. You can safely delete the storage account after deployment is done. -3. Creates a new **cloud service** if one does not already exist. A **cloud service** is the container in which your application is hosted when it is deployed to Azure. For more information, see [Overview of Creating a Hosted Service for Azure]. +2. Creates a new **storage account** if one doesn't exist. The Azure storage account is used to store the application package during deployment. You can safely delete the storage account after deployment is done. +3. Creates a new **cloud service** if one doesn't already exist. 
A **cloud service** is the container in which your application is hosted when it deploys to Azure. For more information, see [Overview of Creating a Hosted Service for Azure]. 4. Publishes the deployment package to Azure. ## Stopping and deleting your application-After deploying your application, you may want to disable it so you can avoid extra costs. Azure bills web role instances per hour of server time consumed. Server time is consumed once your application is deployed, even if the instances are not running and are in the stopped state. +After deploying your application, you may want to disable it so you can avoid extra costs. Azure bills web role instances per hour of server time consumed. Server time is consumed once your application is deployed, even if the instances aren't running and are in the stopped state. 1. In the Windows PowerShell window, stop the service deployment created in the previous section with the following cmdlet: After deploying your application, you may want to disable it so you can avoid ex Stop-AzureService ``` - Stopping the service may take several minutes. When the service is stopped, you receive a message indicating that it has stopped. + Stopping the service may take several minutes. When the service is stopped, you receive a message indicating that it stopped. ![The status of the Stop-AzureService command][The status of the Stop-AzureService command] 2. To delete the service, call the following cmdlet: After deploying your application, you may want to disable it so you can avoid ex When prompted, enter **Y** to delete the service. - Deleting the service may take several minutes. After the service has been deleted you receive a message indicating that the service was deleted. + Deleting the service may take several minutes. After you delete the service, you receive a message indicating that the service was deleted. ![The status of the Remove-AzureService command][The status of the Remove-AzureService command] |
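The full lifecycle covered in that article condenses into a short sketch (the service name shown is hypothetical):

```powershell
# Publish with a unique name, then stop and remove the service when you're done.
$ServiceName = "NodeHelloWorld" + $(Get-Date -Format ('ddhhmm'))
Publish-AzureServiceProject -ServiceName $ServiceName -Location "East US" -Launch
Stop-AzureService
Remove-AzureService
```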
cloud-services | Cloud Services Nodejs Develop Deploy Express App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-express-app.md | Title: Build and deploy a Node.js Express app to Azure Cloud Services (classic)
-description: Use this tutorial to create a new application using the Express module, which provides an MVC framework for creating Node.js web applications.
+description: Use this tutorial to create a new application using the Express module, which provides a Model-View-Controller (MVC) framework for creating Node.js web applications.
Previously updated : 02/21/2023
Last updated : 07/23/2024

-Developers often use 3rd party modules to provide additional
-functionality when developing a Node.js application. In this tutorial
-you'll create a new application using the [Express](https://github.com/expressjs/express) module, which provides an MVC framework for creating Node.js web applications.
+Developers often use non-Microsoft modules to provide more
+functionality when developing a Node.js application. In this tutorial,
+you create a new application using the [Express](https://github.com/expressjs/express) module, which provides a Model-View-Controller framework for creating Node.js web applications.

-A screenshot of the completed application is below:
+The following screenshot shows the completed application:

![A web browser displaying Welcome to Express in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node36.png)

Perform the following steps to create a new cloud service project named `express
   ```

   > [!NOTE]-   > By default, **Add-AzureNodeWebRole** uses an older version of Node.js. The **Set-AzureServiceProjectRole** statement above instructs Azure to use v0.10.21 of Node. Note the parameters are case-sensitive. You can verify the correct version of Node.js has been selected by checking the **engines** property in **WebRole1\package.json**.
-> 
-> 
+ > By default, **Add-AzureNodeWebRole** uses an older version of Node.js. The preceding **Set-AzureServiceProjectRole** line instructs Azure to use v0.10.21 of Node. Note the parameters are case-sensitive. You can verify the correct version of Node.js has been selected by checking the **engines** property in **WebRole1\package.json**.

## Install Express
1. Install the Express generator by issuing the following command:

Perform the following steps to create a new cloud service project named `express
    PS C:\node\expressapp> npm install express-generator -g
    ```

-   The output of the npm command should look similar to the result below.
+   The following screenshot shows the output of the npm command. Your output should look similar.

    ![Windows PowerShell displaying the output of the npm install express command.](./media/cloud-services-nodejs-develop-deploy-express-app/express-g.png)+
2. Change directories to the **WebRole1** directory and use the express command to generate a new application:

   ```powershell
   PS C:\node\expressapp\WebRole1> express
   ```

-   You'll be prompted to overwrite your earlier application. Enter **y** or **yes** to continue. Express will generate the app.js file and a folder structure for building your application.
+   To continue, enter **y** or **yes** when prompted to overwrite your earlier application. Express generates the app.js file and a folder structure for building your application.

   ![The output of the express command](./media/cloud-services-nodejs-develop-deploy-express-app/node23.png)-3. 
To install additional dependencies defined in the package.json file, ++3. To install the other dependencies defined in the package.json file, enter the following command: ```powershell Perform the following steps to create a new cloud service project named `express ``` ![The output of the npm install command](./media/cloud-services-nodejs-develop-deploy-express-app/node26.png)-4. Use the following command to copy the **bin/www** file to **server.js**. This is so the cloud service can find the entry point for this application. ++4. Use the following command to copy the **bin/www** file to **server.js**. This step allows the cloud service to find the entry point for this application. ```powershell PS C:\node\expressapp\WebRole1> copy bin/www server.js ``` After this command completes, you should have a **server.js** file in the WebRole1 directory.+ 5. Modify the **server.js** to remove one of the '.' characters from the following line. ```js var app = require('../app'); ``` - After making this modification, the line should appear as follows. + Once you make this modification, the line should appear as follows: ```js var app = require('./app'); Perform the following steps to create a new cloud service project named `express ## Modifying the View Now modify the view to display the message "Welcome to Express in-Azure". +Azure." 1. Enter the following command to open the index.jade file: Azure". ![The index.jade file, the last line reads: p Welcome to \#{title} in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node31.png) 3. Save the file and exit Notepad.-4. Refresh your browser and you'll see your changes. +4. To see your changes, refresh your browser. ![A browser window, the page contains Welcome to Express in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node32.png) In the Azure PowerShell window, use the **Publish-AzureServiceProject** cmdlet t PS C:\node\expressapp\WebRole1> Publish-AzureServiceProject -ServiceName myexpressapp -Location "East US" -Launch ``` -Once the deployment operation completes, your browser will open and display the web page. +Once the deployment operation completes, your browser opens and displays the web page. ![A web browser displaying the Express page. The URL indicates it is now hosted on Azure.](./media/cloud-services-nodejs-develop-deploy-express-app/node36.png) |
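As the note earlier in that article mentions, you can pin the role's Node.js version before publishing. A sketch, using the example version from that note:

```powershell
# Run from the service root; the version shown is only an example.
PS C:\node\expressapp> Set-AzureServiceProjectRole WebRole1 Node 0.10.21
```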
cloud-services | Cloud Services Performance Testing Visual Studio Profiler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-performance-testing-visual-studio-profiler.md | Title: Profiling a Cloud Service (classic) Locally in the Compute Emulator | Mic description: Investigate performance issues in cloud services with the Visual Studio profiler Previously updated : 02/21/2023 Last updated : 07/23/2024 -A variety of tools and techniques are available for testing the performance of cloud services. +Various tools and techniques are available for testing the performance of cloud services. When you publish a cloud service to Azure, you can have Visual Studio collect profiling data and then analyze it locally, as described in [Profiling an Azure Application][1].-You can also use diagnostics to track a variety of performance +You can also use diagnostics to track numerous performance counters, as described in [Using performance counters in Azure][2]. You might also want to profile your application locally in the compute emulator before deploying it to the cloud. -This article covers the CPU Sampling method of profiling, which can be done locally in the emulator. CPU sampling is a method of profiling that is not very intrusive. At a designated sampling interval, the profiler takes a snapshot of the call stack. The data is collected over a period of time, and shown in a report. This method of profiling tends to indicate where in a computationally intensive application most of the CPU work is being done. This gives you the opportunity to focus on the "hot path" where your application is spending the most time. +This article covers the CPU Sampling method of profiling, which can be done locally in the emulator. CPU sampling is a method of profiling that isn't intrusive. At a designated sampling interval, the profiler takes a snapshot of the call stack. The data is collected over a period of time, and shown in a report. This method of profiling tends to indicate where in a computationally intensive application most of the CPU work is being done, giving you the opportunity to focus on the "hot path" where your application is spending the most time. -## 1: Configure Visual Studio for profiling -First, there are a few Visual Studio configuration options that might be helpful when profiling. To make sense of the profiling reports, you'll need symbols (.pdb files) for your application and also symbols for system libraries. You'll want to make sure that you reference the available symbol servers. To do this, on the **Tools** menu in Visual Studio, choose **Options**, then choose **Debugging**, then **Symbols**. Make sure that Microsoft Symbol Servers is listed under **Symbol file (.pdb) locations**. You can also reference https://referencesource.microsoft.com/symbols, which might have additional symbol files. +## Configure Visual Studio for profiling +First, there are a few Visual Studio configuration options that might be helpful when profiling. To make sense of the profiling reports, you need symbols (.pdb files) for your application and also symbols for system libraries. Make sure you reference the available symbol servers; to do so, on the **Tools** menu in Visual Studio, choose **Options**, then choose **Debugging**, then **Symbols**. Make sure that Microsoft Symbol Servers is listed under **Symbol file (.pdb) locations**. You can also reference https://referencesource.microsoft.com/symbols, which might have more symbol files. 
![Symbol options][4]

If desired, you can simplify the reports that the profiler generates by setting
![Just My Code options][17]

-You can use these instructions with an existing project or with a new project. If you create a new project to try the techniques described below, choose a C# **Azure Cloud Service** project, and select a **Web Role** and a **Worker Role**.
+You can use these instructions with an existing project or with a new project. If you create a new project to try the following techniques, choose a C# **Azure Cloud Service** project, and select a **Web Role** and a **Worker Role**.

![Azure Cloud Service project roles][5]

private async Task RunAsync(CancellationToken cancellationToken)
    }
```

-Build and run your cloud service locally without debugging (Ctrl+F5), with the solution configuration set to **Release**. This ensures that all files and folders are created for running the application locally, and ensures that all the emulators are started. Start the Compute Emulator UI from the taskbar to verify that your worker role is running.
+Build and run your cloud service locally without debugging (Ctrl+F5), with the solution configuration set to **Release**. This setting ensures that all files and folders are created for running the application locally and that all the emulators are started. To verify that your worker role is running, start the Compute Emulator UI from the taskbar.

-## 2: Attach to a process
+## Attach to a process
Instead of profiling the application by starting it from the Visual Studio 2010 IDE, you must attach the profiler to a running process.

-To attach the profiler to a process, on the **Analyze** menu, choose **Profiler** and **Attach/Detach**.
+To attach the profiler to a process, go to the **Analyze** menu, select **Profiler**, and choose **Attach/Detach**.

![Attach profile option][6]

For a worker role, find the WaWorkerHost.exe process.

![WaWorkerHost process][7]

-If your project folder is on a network drive, the profiler will ask you to provide another location to save the profiling reports.
+If your project folder is on a network drive, the profiler asks you to provide another location to save the profiling reports.

You can also attach to a web role by attaching to WaIISHost.exe.

If there are multiple worker role processes in your application, you need to use the processID to distinguish them. You can query the processID programmatically by accessing the Process object. For example, if you add this code to the Run method of the RoleEntryPoint-derived class in a role, you can look at the log in the Compute Emulator UI to know what process to connect to.

```csharp
var process = System.Diagnostics.Process.GetCurrentProcess();
Open the worker role log console window in the Compute Emulator UI by clicking o
![View process ID][9]

-One you've attached, perform the steps in your application's UI (if needed) to reproduce the scenario.
+Once you attach, perform the steps in your application's UI (if needed) to reproduce the scenario.

When you want to stop profiling, choose the **Stop Profiling** link.

![Stop Profiling option][10]

-## 3: View performance reports
+## View performance reports
The performance report for your application is displayed. At this point, the profiler stops executing, saves data in a .vsp file, and displays a report that shows an analysis of this data. 
![Profiler report][11]

-If you see String.wstrcpy in the Hot Path, click on Just My Code to change the view to show user code only. If you see String.Concat, try pressing the Show All Code button.
+If you see String.wstrcpy in the Hot Path, select **Just My Code** to change the view to show user code only. If you see String.Concat, try pressing the **Show All Code** button.

You should see the Concatenate method and String.Concat taking up a large portion of the execution time.

![Analysis of report][12]

-If you added the string concatenation code in this article, you should see a warning in the Task List for this. You may also see a warning that there is an excessive amount of garbage collection, which is due to the number of strings that are created and disposed.
+If you added the string concatenation code in this article, you should see a warning in the Task List for it. You may also see a warning that there's an excessive amount of garbage collection, which is due to the number of strings created and disposed.

![Performance warnings][14]

-## 4: Make changes and compare performance
-You can also compare the performance before and after a code change. Stop the running process, and edit the code to replace the string concatenation operation with the use of StringBuilder:
+## Make changes and compare performance
+You can also compare the performance before and after a code change. To replace the string concatenation operation with the use of StringBuilder, stop the running process and edit the code:

```csharp
public static string Concatenate(int number)
The reports highlight differences between the two runs.

![Comparison report][16]

-Congratulations! You've gotten started with the profiler.
+Congratulations! You got started with the profiler.

## Troubleshooting-* Make sure you are profiling a Release build and start without debugging.
-* If the Attach/Detach option is not enabled on the Profiler menu, run the Performance Wizard.
+* Make sure you profile a Release build and start without debugging.
+* If the Attach/Detach option isn't enabled on the Profiler menu, run the Performance Wizard.
* Use the Compute Emulator UI to view the status of your application.
* If you have problems starting applications in the emulator, or attaching the profiler, shut down the compute emulator and restart it. If that doesn't solve the problem, try rebooting. This problem can occur if you use the Compute Emulator to suspend and remove running deployments.
-* If you have used any of the profiling commands from the
  command line, especially the global settings, make sure that VSPerfClrEnv /globaloff has been called and that VsPerfMon.exe has been shut down.
+* If you used any of the profiling commands from the command line, especially the global settings, make sure you call VSPerfClrEnv /globaloff and shut down VsPerfMon.exe.
-* If when sampling, you see the message "PRF0025: No data was collected," check that the process you attached to has CPU activity. Applications that are not doing any computational work might not produce any sampling data. It's also possible that the process exited before any sampling was done. 
Check to see that the Run method for a role that you profile doesn't terminate. ## Next Steps-Instrumenting Azure binaries in the emulator is not supported in the Visual Studio profiler, but if you want to test memory allocation, you can choose that option when profiling. You can also choose concurrency profiling, which helps you determine whether threads are wasting time competing for locks, or tier interaction profiling, which helps you track down performance problems when interacting between tiers of an application, most frequently between the data tier and a worker role. You can view the database queries that your app generates and use the profiling data to improve your use of the database. For information about tier interaction profiling, see the blog post [Walkthrough: Using the Tier Interaction Profiler in Visual Studio Team System 2010][3]. +Instrumenting Azure binaries in the emulator isn't supported in the Visual Studio profiler, but if you want to test memory allocation, you can choose that option when profiling. You can also choose concurrency profiling, which helps you determine whether threads are wasting time competing for locks, or tier interaction profiling, which helps you track down performance problems when interacting between tiers of an application, most frequently between the data tier and a worker role. You can view the database queries that your app generates and use the profiling data to improve your use of the database. For information about tier interaction profiling, see the blog post [Walkthrough: Using the Tier Interaction Profiler in Visual Studio Team System 2010][3]. [1]: ../azure-monitor/app/profiler.md [2]: /previous-versions/azure/hh411542(v=azure.100) |
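The StringBuilder replacement shown in the diff above is truncated after the method signature. As a hedged illustration only, a revised `Concatenate` method might look like the following; the signature comes from the article, but the loop body is an assumption, not the article's exact code:

```csharp
using System.Text;

public static class StringOperations
{
    // Reuses one StringBuilder buffer instead of allocating a new
    // intermediate string on every loop iteration.
    public static string Concatenate(int number)
    {
        var builder = new StringBuilder();
        for (int i = 0; i < number; i++)
        {
            builder.Append(i);
        }
        return builder.ToString();
    }
}
```

Because `StringBuilder.Append` grows a single internal buffer, this version avoids the per-iteration allocations that triggered the garbage-collection warning in the earlier profiling run.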
cloud-services | Cloud Services Php Create Web Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-php-create-web-role.md | ms.assetid: 9f7ccda0-bd96-4f7b-a7af-fb279a9e975b ms.devlang: php Previously updated : 04/11/2018 Last updated : 07/23/2024 # Create PHP web and worker roles+ ## Overview [!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)] -This guide will show you how to create PHP web or worker roles in a Windows development environment, choose a specific version of PHP from the "built-in" versions available, change the PHP configuration, enable extensions, and finally, deploy to Azure. It also describes how to configure a web or worker role to use a PHP runtime (with custom configuration and extensions) that you provide. +This guide shows you how to create PHP web or worker roles in a Windows development environment, choose a specific version of PHP from the "built-in" versions available, change the PHP configuration, enable extensions, and finally, deploy to Azure. It also describes how to configure a web or worker role to use a PHP runtime (with custom configuration and extensions) that you provide. -Azure provides three compute models for running applications: Azure App Service, Azure Virtual Machines, and Azure Cloud Services. All three models support PHP. Cloud Services, which includes web and worker roles, provides *platform as a service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front-end web applications. A worker role can run asynchronous, long-running or perpetual tasks independent of user interaction or input. +Azure provides three compute models for running applications: Azure App Service, Azure Virtual Machines, and Azure Cloud Services. All three models support PHP. Cloud Services, which includes web and worker roles, provides *platform as a service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front-end web applications. A worker role can run asynchronous, long-running, or perpetual tasks independent of user interaction or input. For more information about these options, see [Compute hosting options provided by Azure](cloud-services-choose-me.md). ## Download the Azure SDK for PHP -The [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php) consists of several components. This article will use two of them: Azure PowerShell and the Azure emulators. These two components can be installed via the Microsoft Web Platform Installer. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/). +The [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php) consists of several components. This article uses two of them: Azure PowerShell and the Azure emulators. These two components can be installed via the Microsoft Web Platform Installer. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/). ## Create a Cloud Services project -The first step in creating a PHP web or worker role is to create an Azure Service project. an Azure Service project serves as a logical container for web and worker roles, and it contains the project's [service definition (.csdef)] and [service configuration (.cscfg)] files. +The first step in creating a PHP web or worker role is to create an Azure Service project. 
An Azure Service project serves as a logical container for web and worker roles, and it contains the project's [service definition (.csdef)] and [service configuration (.cscfg)] files. To create a new Azure Service project, run Azure PowerShell as an administrator, and execute the following command: To create a new Azure Service project, run Azure PowerShell as an administrator, PS C:\>New-AzureServiceProject myProject ``` -This command will create a new directory (`myProject`) to which you can add web and worker roles. +This command creates a new directory (`myProject`) to which you can add web and worker roles. ## Add PHP web or worker roles PS C:\myProject> Add-AzurePHPWorkerRole roleName ## Use your own PHP runtime -In some cases, instead of selecting a built-in PHP runtime and configuring it as described above, you may want to provide your own PHP runtime. For example, you can use the same PHP runtime in a web or worker role that you use in your development environment. This makes it easier to ensure that the application will not change behavior in your production environment. +In some cases, instead of selecting a built-in PHP runtime and configuring it as previously described, you may want to provide your own PHP runtime. For example, you can use the same PHP runtime in a web or worker role that you use in your development environment. This process makes it easier to ensure that the application behavior stays the same in your production environment. ### Configure a web role to use your own PHP runtime To configure a web role to use a PHP runtime that you provide, follow these steps: -1. Create an Azure Service project and add a PHP web role as described previously in this topic. +1. Create an Azure Service project and add a PHP web role as described previously in this article. 2. Create a `php` folder in the `bin` folder that is in your web role's root directory, and then add your PHP runtime (all binaries, configuration files, subfolders, etc.) to the `php` folder.-3. (OPTIONAL) If your PHP runtime uses the [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you will need to configure your web role to install [SQL Server Native Client 2012][sql native client] when it is provisioned. To do this, add the [sqlncli.msi x64 installer] to the `bin` folder in your web role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime does not use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step: +3. (OPTIONAL) If your PHP runtime uses the [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you need to configure your web role to install [SQL Server Native Client 2012][sql native client] when it's provisioned. To do so, add the [sqlncli.msi x64 installer] to the `bin` folder in your web role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime doesn't use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step: ```console msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES ``` -4. Define a startup task that configures [Internet Information Services (IIS)][iis.net] to use your PHP runtime to handle requests for `.php` pages. 
To do this, open the `setup_web.cmd` file (in the `bin` file of your web role's root directory) in a text editor and replace its contents with the following script: +4. Define a startup task that configures [Internet Information Services (IIS)][iis.net] to use your PHP runtime to handle requests for `.php` pages. To do so, open the `setup_web.cmd` file (in the `bin` folder of your web role's root directory) in a text editor and replace its contents with the following script: ```cmd @ECHO ON To configure a web role to use a PHP runtime that you provide, follow these step %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/handlers /+"[name='PHP',path='*.php',verb='GET,HEAD,POST',modules='FastCgiModule',scriptProcessor='%PHP_FULL_PATH%',resourceType='Either',requireAccess='Script']" /commit:apphost %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /"[fullPath='%PHP_FULL_PATH%'].queueLength:50000" ```-5. Add your application files to your web role's root directory. This will be the web server's root directory. -6. Publish your application as described in the [Publish your application](#publish-your-application) section below. +5. Add your application files to your web role's root directory, which becomes the web server's root directory. +6. Publish your application as described in the [Publish your application section](#publish-your-application). > [!NOTE]-> The `download.ps1` script (in the `bin` folder of the web role's root directory) can be deleted after you follow the steps described above for using your own PHP runtime. -> -> +> The `download.ps1` script (in the `bin` folder of the web role's root directory) can be deleted after you follow the preceding steps for using your own PHP runtime. ### Configure a worker role to use your own PHP runtime To configure a worker role to use a PHP runtime that you provide, follow these steps: -1. Create an Azure Service project and add a PHP worker role as described previously in this topic. +1. Create an Azure Service project and add a PHP worker role as described previously in this article. 2. Create a `php` folder in the worker role's root directory, and then add your PHP runtime (all binaries, configuration files, subfolders, etc.) to the `php` folder.-3. (OPTIONAL) If your PHP runtime uses [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you will need to configure your worker role to install [SQL Server Native Client 2012][sql native client] when it is provisioned. To do this, add the [sqlncli.msi x64 installer] to the worker role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime does not use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step: +3. (OPTIONAL) If your PHP runtime uses [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you need to configure your worker role to install [SQL Server Native Client 2012][sql native client] when it's provisioned. To do so, add the [sqlncli.msi x64 installer] to the worker role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime doesn't use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step: ```console msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES ``` -4. 
Define a startup task that adds your `php.exe` executable to the worker role's PATH environment variable when the role is provisioned. To do this, open the `setup_worker.cmd` file (in the worker role's root directory) in a text editor and replace its contents with the following script: +4. Define a startup task that adds your `php.exe` executable to the worker role's PATH environment variable when the role is provisioned. To do so, open the `setup_worker.cmd` file (in the worker role's root directory) in a text editor and replace its contents with the following script: ```cmd @echo on To configure a worker role to use a PHP runtime that you provide, follow these s exit /b -1 ``` 5. Add your application files to your worker role's root directory.-6. Publish your application as described in the [Publish your application](#publish-your-application) section below. +6. Publish your application as described in the [Publish your application section](#publish-your-application). ## Run your application in the compute and storage emulators -The Azure emulators provide a local environment in which you can test your Azure application before you deploy it to the cloud. There are some differences between the emulators and the Azure environment. To understand this better, see [Use the Azure Storage Emulator for development and testing](../storage/common/storage-use-emulator.md). +The Azure emulators provide a local environment in which you can test your Azure application before you deploy it to the cloud. There are some differences between the emulators and the Azure environment. To understand these differences better, see [Use the Azure Storage Emulator for development and testing](../storage/common/storage-use-emulator.md). -Note that you must have PHP installed locally to use the compute emulator. The compute emulator will use your local PHP installation to run your application. +You must have PHP installed locally to use the compute emulator. The compute emulator uses your local PHP installation to run your application. To run your project in the emulators, execute the following command from your project's root directory: To run your project in the emulators, execute the following command from your pr PS C:\MyProject> Start-AzureEmulator ``` -You will see output similar to this: +You should see output similar to the following: ```output Creating local package... Role is running at http://127.0.0.1:81 Started ``` -You can see your application running in the emulator by opening a web browser and browsing to the local address shown in the output (`http://127.0.0.1:81` in the example output above). +You can see your application running in the emulator by opening a web browser and browsing to the local address shown in the output (`http://127.0.0.1:81` in the example output shown earlier). To stop the emulators, execute this command: |
cloud-services | Cloud Services Powershell Create Cloud Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-powershell-create-cloud-container.md | Title: Create a cloud service (classic) container with PowerShell | Microsoft Do description: This article explains how to create a cloud service container with PowerShell. The container hosts web and worker roles. Previously updated : 02/21/2023 Last updated : 07/23/2024 -This article explains how to quickly create a Cloud Services container using Azure PowerShell cmdlets. Please follow the steps below: +This article explains how to quickly create a Cloud Services container using Azure PowerShell cmdlets. Use the following steps: 1. Install the Microsoft Azure PowerShell cmdlet from the [Azure PowerShell downloads](https://aka.ms/webpi-azps) page. 2. Open the PowerShell command prompt. Get-help New-AzureService ### Next steps -* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure/Get-AzureService), [Remove-AzureService](/powershell/module/servicemanagement/azure/Remove-AzureService), and [Set-AzureService](/powershell/module/servicemanagement/azure/set-azureservice) commands. You may also refer to [How to configure cloud services](cloud-services-how-to-configure-portal.md) for further information. +* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure/Get-AzureService), [Remove-AzureService](/powershell/module/servicemanagement/azure/Remove-AzureService), and [Set-AzureService](/powershell/module/servicemanagement/azure/set-azureservice) commands. For more information, see [How to configure cloud services](cloud-services-how-to-configure-portal.md). * To publish your cloud service project to Azure, refer to the **PublishCloudService.ps1** code sample from [archived cloud services repository](https://github.com/MicrosoftDocs/azure-cloud-services-files/tree/master/Scripts/cloud-services-continuous-delivery). |
cloud-services | Cloud Services Python How To Use Service Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md | Title: Use the Service Management API (Python) - feature guide + Title: Use the classic deployment model (Python) - feature guide description: Learn how to programmatically perform common service management tasks from Python. Previously updated : 02/21/2023 Last updated : 07/23/2024 -The Azure Service Management API provides programmatic access to much of the service management functionality available through the [Azure portal]. You can use the Azure SDK for Python to manage your cloud services and storage accounts. +The Azure classic deployment model provides programmatic access to much of the service management functionality available through the [Azure portal]. You can use the Azure SDK for Python to manage your cloud services and storage accounts. -To use the Service Management API, you need to [create an Azure account](https://azure.microsoft.com/pricing/free-trial/). +To use the classic deployment model, you need to [create an Azure account](https://azure.microsoft.com/pricing/free-trial/). ## <a name="Concepts"> </a>Concepts-The Azure SDK for Python wraps the [Service Management API][svc-mgmt-rest-api], which is a REST API. All API operations are performed over TLS and mutually authenticated by using X.509 v3 certificates. The management service can be accessed from within a service running in Azure. It also can be accessed directly over the Internet from any application that can send an HTTPS request and receive an HTTPS response. +The Azure SDK for Python wraps the [classic deployment model][svc-mgmt-rest-api], which is a REST API. All API operations are performed over Transport Layer Security (TLS) and mutually authenticated by using X.509 v3 certificates. The management service can be accessed from within a service running in Azure. It also can be accessed directly over the Internet from any application that can send an HTTPS request and receive an HTTPS response. ## <a name="Installation"> </a>Installation All the features described in this article are available in the `azure-servicemanagement-legacy` package, which you can install by using pip. For more information about installation (for example, if you're new to Python), see [Install Python and the Azure SDK](/azure/developer/python/sdk/azure-sdk-install). image_name = 'OpenLogic__OpenLogic-CentOS-62-20120531-en-us-30GB.vhd' # will be created media_link = 'url_to_target_storage_blob_for_vm_hd' -# Linux VM configuration, you can use WindowsConfigurationSet +# Linux virtual machine (VM) configuration, you can use WindowsConfigurationSet # for a Windows VM instead linux_config = LinuxConfigurationSet('myhostname', 'myuser', 'mypassword', True) sms.delete_hosted_service(service_name='myvm') ``` ## Create a virtual machine from a captured virtual machine image-To capture a VM image, you first call the **capture\_vm\_image** method. +To capture a virtual machine (VM) image, you first call the **capture\_vm\_image** method. ```python from azure import * To learn more about how to capture a Linux virtual machine in the classic deploy To learn more about how to capture a Windows virtual machine in the classic deployment model, see [Capture a Windows virtual machine](/previous-versions/azure/virtual-machines/windows/classic/capture-image-classic). 
## <a name="What's Next"> </a>Next steps-Now that you've learned the basics of service management, you can access the [Complete API reference documentation for the Azure Python SDK](https://azure-sdk-for-python.readthedocs.org/) and perform complex tasks easily to manage your Python application. +Now that you learned the basics of service management, you can access the [Complete API reference documentation for the Azure Python SDK](https://azure-sdk-for-python.readthedocs.org/) and perform complex tasks easily to manage your Python application. For more information, see the [Python Developer Center](https://azure.microsoft.com/develop/python/). |
cloud-services | Cloud Services Python Ptvs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md | Title: Get started with Python and Azure Cloud Services (classic)| Microsoft Doc description: Overview of using Python Tools for Visual Studio to create Azure cloud services including web roles and worker roles. Previously updated : 02/21/2023 Last updated : 07/23/2024 This article provides an overview of using Python web and worker roles using [Py ## Prerequisites * [Visual Studio 2013, 2015, or 2017](https://www.visualstudio.com/) * [Python Tools for Visual Studio][Python Tools for Visual Studio] (PTVS)-* [Azure SDK Tools for VS 2013][Azure SDK Tools for VS 2013] or +* [Azure SDK Tools for Visual Studio (VS) 2013][Azure SDK Tools for VS 2013] or [Azure SDK Tools for VS 2015][Azure SDK Tools for VS 2015] or [Azure SDK Tools for VS 2017][Azure SDK Tools for VS 2017] * [Python 2.7 32-bit][Python 2.7 32-bit] or [Python 3.8 32-bit][Python 3.8 32-bit] This article provides an overview of using Python web and worker roles using [Py [!INCLUDE [create-account-and-websites-note](../../includes/create-account-and-websites-note.md)] ## What are Python web and worker roles?-Azure provides three compute models for running applications: [Web Apps feature in Azure App Service][execution model-web sites], [Azure Virtual Machines][execution model-vms], and [Azure Cloud Services][execution model-cloud services]. All three models support Python. Cloud Services, which include web and worker roles, provide *Platform as a Service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front end web applications, while a worker role can run asynchronous, long-running, or perpetual tasks independent of user interaction or input. +Azure provides three compute models for running applications: [Web Apps feature in Azure App Service][execution model-web sites], [Azure Virtual Machines][execution model-vms], and [Azure Cloud Services][execution model-cloud services]. All three models support Python. Cloud Services, which include web and worker roles, provide *Platform as a Service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front-end web applications. A worker role can run asynchronous, long-running, or perpetual tasks independent of user interaction or input. For more information, see [What is a Cloud Service?]. The worker role template comes with boilerplate code to connect to an Azure stor ![Cloud Service Solution](./media/cloud-services-python-ptvs/worker.png) -You can add web or worker roles to an existing cloud service at any time. You can choose to add existing projects in your solution, or create new ones. +You can add web or worker roles to an existing cloud service at any time. You can choose to add existing projects in your solution, or create new ones. ![Add Role Command](./media/cloud-services-python-ptvs/add-new-or-existing-role.png) -Your cloud service can contain roles implemented in different languages. For example, you can have a Python web role implemented using Django, with Python, or with C# worker roles. +Your cloud service can contain roles implemented in different languages. For example, you can have a Python web role implemented using Django, with Python or C# worker roles. 
You can easily communicate between your roles using Service Bus queues or storage queues. ## Install Python on the cloud service > [!WARNING] Your cloud service can contain roles implemented in different languages. For ex > > -The main problem with the setup scripts is that they do not install python. First, define two [startup tasks](cloud-services-startup-tasks.md) in the [ServiceDefinition.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) file. The first task (**PrepPython.ps1**) downloads and installs the Python runtime. The second task (**PipInstaller.ps1**) runs pip to install any dependencies you may have. +The main problem with the setup scripts is that they don't install Python. First, define two [startup tasks](cloud-services-startup-tasks.md) in the [ServiceDefinition.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) file. The first task (**PrepPython.ps1**) downloads and installs the Python runtime. The second task (**PipInstaller.ps1**) runs pip to install any dependencies you may have. The following scripts were written targeting Python 3.8. If you want to use version 2.x of Python, set the **PYTHON2** variable to **on** for the two startup tasks and the runtime task: `<Variable name="PYTHON2" value="<mark>on</mark>" />`. if (-not $is_emulated){ > > -The **bin\LaunchWorker.ps1** was originally created to do a lot of prep work but it doesn't really work. Replace the contents in that file with the following script. +The **bin\LaunchWorker.ps1** was originally created to do a lot of prep work, but it doesn't really work. Replace the contents in that file with the following script. This script calls the **worker.py** file from your Python project. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used, otherwise Python 3.8 is used. else ``` #### ps.cmd-The Visual Studio templates should have created a **ps.cmd** file in the **./bin** folder. This shell script calls out the PowerShell wrapper scripts above and provides logging based on the name of the PowerShell wrapper called. If this file wasn't created, here is what should be in it. +The Visual Studio templates typically create a **ps.cmd** file in the **./bin** folder. This shell script calls the preceding PowerShell wrapper scripts and provides logging based on the name of the PowerShell wrapper called. If the file wasn't created, add the following script to it: ```cmd @echo off if not exist "%DiagnosticStore%\LogFiles" mkdir "%DiagnosticStore%\LogFiles" %SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Unrestricted -File %* >> "%DiagnosticStore%\LogFiles\%~n1.txt" 2>> "%DiagnosticStore%\LogFiles\%~n1.err.txt" ``` -- ## Run locally If you set your cloud service project as the startup project and press F5, the cloud service runs in the local Azure emulator. -Although PTVS supports launching in the emulator, debugging (for example, breakpoints) does not work. +Although PTVS supports launching in the emulator, debugging (for example, breakpoints) doesn't work. -To debug your web and worker roles, you can set the role project as the startup project and debug that instead. You can also set multiple startup projects. Right-click the solution and then select **Set StartUp Projects**. +To debug your web and worker roles, you can set the role project as the startup project and debug that instead. You can also set multiple startup projects. Right-click the solution and then select **Set StartUp Projects**. 
![Solution Startup Project Properties](./media/cloud-services-python-ptvs/startup.png) To publish, right-click the cloud service project in the solution and then selec Follow the wizard. If you need to, enable remote desktop. Remote desktop is helpful when you need to debug something. -When you are done configuring settings, click **Publish**. +When you finish configuring settings, choose **Publish**. -Some progress appears in the output window, then you'll see the Microsoft Azure Activity Log window. +Progress appears in the output window, and then you see the Microsoft Azure Activity Log window. ![Microsoft Azure Activity Log Window](./media/cloud-services-python-ptvs/publish-activity-log.png) |
communication-services | Actions For Call Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md | To connect to any 1:1 or group call, use the ServerCallLocator. If you started a ```csharp Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events CallLocator serverCallLocator = new ServerCallLocator("<ServerCallId>");-ConnctCallResult response = await client.ConnectAsync(serverCallLocator, callbackUri); +ConnectCallResult response = await client.ConnectCallAsync(serverCallLocator, callbackUri); ``` ### [Java](#tab/java) To connect to a Rooms call, use RoomCallLocator which takes RoomId. ```csharp Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events CallLocator roomCallLocator = new RoomCallLocator("<RoomId>");-ConnctCallResult response = await client.ConnectAsync(roomCallLocator, callbackUri); +ConnectCallResult response = await client.ConnectCallAsync(roomCallLocator, callbackUri); ``` ### [Java](#tab/java) |
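For context, `ConnectCallAsync` is called on a `CallAutomationClient`. A minimal hedged sketch of the surrounding setup follows; only the final line comes from the diff, and the connection string and placeholder values are illustrative assumptions:

```csharp
using System;
using Azure.Communication.CallAutomation;

// Illustrative placeholders; only the ConnectCallAsync call itself comes from the article.
var client = new CallAutomationClient("<ACS_CONNECTION_STRING>");
Uri callbackUri = new Uri("https://<myendpoint>/Events"); // receives subsequent call events
CallLocator serverCallLocator = new ServerCallLocator("<ServerCallId>");

// Connects to the ongoing call; mid-call events are delivered to callbackUri.
ConnectCallResult response = await client.ConnectCallAsync(serverCallLocator, callbackUri);
```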
communication-services | Send Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md | This quickstart describes how to send email using our Email SDKs. ::: zone-end ::: zone pivot="programming-language-csharp" ::: zone-end ::: zone pivot="programming-language-javascript" |
container-apps | Connect Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md | The following diagram shows how these values are used to compose a container app [!INCLUDE [container-apps-get-fully-qualified-domain-name](../../includes/container-apps-get-fully-qualified-domain-name.md)] -## Dapr location +### Dapr location Developing microservices often requires you to implement patterns common to distributed architecture. Dapr allows you to secure microservices with mutual Transport Layer Security (TLS) (client certificates), trigger retries when errors occur, and take advantage of distributed tracing when Azure Application Insights is enabled. A microservice that uses Dapr is available through the following URL pattern: :::image type="content" source="media/connect-apps/azure-container-apps-location-dapr.png" alt-text="Azure Container Apps container app location with Dapr."::: +## Call a container app by name ++You can call a container app by sending a request to `http://<CONTAINER_APP_NAME>` from another app in the environment. + ## Next steps > [!div class="nextstepaction"] |
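As a hedged illustration of the name-based call described above, the app name and route below are hypothetical and the caller is assumed to run in the same Container Apps environment:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class PeerCallDemo
{
    // "my-backend-app" is a hypothetical container app name in the same environment.
    public static async Task<string> CallPeerAsync()
    {
        using var http = new HttpClient();
        // The environment's internal DNS resolves the bare app name to the peer app.
        HttpResponseMessage response = await http.GetAsync("http://my-backend-app/api/health");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```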
container-apps | Firewall Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md | The following tables describe how to configure a collection of NSG allow rules. #### Considerations - If you're running HTTP servers, you might need to add ports `80` and `443`.-- Don't explicitly deny the Azure DNS address `168.63.128.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function.+- Don't explicitly deny the Azure DNS address `168.63.129.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function. |
cosmos-db | Index Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md | Here are some rules for included and excluded paths precedence in Azure Cosmos D ## Vector indexes +> [!NOTE] +> You must enroll in the [Azure Cosmos DB NoSQL Vector Index preview feature](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) to specify a vector indexing policy.> + **Vector** indexes increase the efficiency when performing vector searches using the `VectorDistance` system function. Vector searches will have significantly lower latency, higher throughput, and less RU consumption when leveraging a vector index. You can specify the following types of vector index policies: | Type | Description | Max dimensions | Here's an example of an indexing policy with a vector index: } ``` -> [!NOTE] -> You must enroll in the [Azure Cosmos DB NoSQL Vector Index preview feature](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) to specify a vector indexing policy.> - > [!IMPORTANT] > A vector indexing policy must be on the path defined in the container's vector policy. [Learn more about container vector policies](nosql/vector-search.md#container-vector-policies). > Vector indexes must also be defined at the time of Container creation and cannot be modified once created. In a future release, vector indexes will be modifiable. -+>[!IMPORTANT] +> Add the vector path to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. Not adding the vector path to "excludedPaths" results in a higher RU charge and higher latency for vector insertions. ## Spatial indexes |
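As a hedged illustration of the `VectorDistance` system function this section mentions, a .NET SDK query against a container whose vector policy and index are defined on `/vector` might look like the following; the container handle, embedding values, and property names are assumptions, not part of the article:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class VectorQueryDemo
{
    // "container" is an assumed existing container with a vector index on /vector;
    // "embedding" is a precomputed query embedding of the matching dimensionality.
    public static async Task QueryAsync(Container container, float[] embedding)
    {
        QueryDefinition query = new QueryDefinition(
            "SELECT TOP 5 c.id, VectorDistance(c.vector, @embedding) AS score " +
            "FROM c ORDER BY VectorDistance(c.vector, @embedding)")
            .WithParameter("@embedding", embedding);

        FeedIterator<dynamic> results = container.GetItemQueryIterator<dynamic>(query);
        while (results.HasMoreResults)
        {
            foreach (var item in await results.ReadNextAsync())
            {
                Console.WriteLine(item); // id plus similarity score
            }
        }
    }
}
```

Ordering by `VectorDistance` is what lets the query engine use the vector index instead of scoring every document.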
cosmos-db | How To Dotnet Vector Index Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-vector-index-query.md | For our example with book details, the vector policy can look like the example J Once the vector embedding paths are decided, vector indexes need to be added to the indexing policy. Currently, the vector search feature for Azure Cosmos DB for NoSQL is supported only on new containers so you need to apply the vector policy during the time of container creation and it can't be modified later. For this example, the indexing policy would look something like this: ```csharp - Collection<Embedding> collection = new Collection<Embedding>(embeddings); - ContainerProperties properties = new ContainerProperties(id: "vector-container", partitionKeyPath: "/id") - { - VectorEmbeddingPolicy = new(collection), - IndexingPolicy = new IndexingPolicy() - { - VectorIndexes = new() - { - new VectorIndexPath() - { - Path = "/vector", - Type = VectorIndexType.QuantizedFlat, - } - } - }, - }; + Collection<Embedding> collection = new Collection<Embedding>(embeddings); + ContainerProperties properties = new ContainerProperties(id: "vector-container", partitionKeyPath: "/id") + { + VectorEmbeddingPolicy = new(collection), + IndexingPolicy = new IndexingPolicy() + { + VectorIndexes = new() + { + new VectorIndexPath() + { + Path = "/vector", + Type = VectorIndexType.QuantizedFlat, + } + } + }, + }; + properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" }); + properties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/vector/*" }); ``` +>[!IMPORTANT] +> Add the vector path to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. Not adding the vector path to "excludedPaths" results in a higher RU charge and higher latency for vector insertions. > [!IMPORTANT] > Currently vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy during the time of container creation as it can't be modified later. Both policies will be modifiable in a future improvement to the preview feature. |
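As a hedged follow-up, the `properties` object sketched in the diff would typically be passed to container creation; `database` is an assumed existing `Database` handle and the throughput value is illustrative:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class VectorContainerSetup
{
    // "properties" is the ContainerProperties object built in the snippet above.
    public static async Task<Container> CreateVectorContainerAsync(
        Database database, ContainerProperties properties)
    {
        // Creation applies both the vector embedding policy and the vector index;
        // neither can be changed after the container exists.
        ContainerResponse response = await database.CreateContainerAsync(properties, throughput: 400);
        return response.Container;
    }
}
```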
cosmos-db | How To Java Vector Index Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-java-vector-index-query.md | Once the vector embedding paths are decided, vector indexes need to be added to ```java IndexingPolicy indexingPolicy = new IndexingPolicy(); indexingPolicy.setIndexingMode(IndexingMode.CONSISTENT);-ExcludedPath excludedPath = new ExcludedPath("/*"); -indexingPolicy.setExcludedPaths(Collections.singletonList(excludedPath)); +ExcludedPath excludedPath1 = new ExcludedPath("/coverImageVector/*"); +ExcludedPath excludedPath2 = new ExcludedPath("/contentVector/*"); +indexingPolicy.setExcludedPaths(ImmutableList.of(excludedPath1, excludedPath2)); -IncludedPath includedPath1 = new IncludedPath("/name/?"); -IncludedPath includedPath2 = new IncludedPath("/description/?"); -indexingPolicy.setIncludedPaths(ImmutableList.of(includedPath1, includedPath2)); +IncludedPath includedPath1 = new IncludedPath("/*"); +indexingPolicy.setIncludedPaths(Collections.singletonList(includedPath1)); // Creating vector indexes CosmosVectorIndexSpec cosmosVectorIndexSpec1 = new CosmosVectorIndexSpec(); database.createContainer(collectionDefinition).block(); ``` ++>[!IMPORTANT] +> Add the vector path to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. Not adding the vector path to "excludedPaths" results in a higher RU charge and higher latency for vector insertions. + > [!IMPORTANT] > Currently vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy during the time of container creation as it can't be modified later. Both policies will be modifiable in a future improvement to the preview feature. |
cosmos-db | How To Manage Indexing Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md | In addition to including or excluding paths for individual properties, you can a > You must enroll in the [Azure Cosmos DB NoSQL Vector Index preview feature](vector-search.md#enroll-in-the-vector-search-preview-feature) to use vector search in Azure Cosmos DB for NoSQL.> >[!IMPORTANT]-> A vector indexing policy must be on the path defined in the container's vector policy. [Learn more about container vector policies](vector-search.md#container-vector-policies).) +> A vector indexing policy must be on the same path defined in the container's vector policy. [Learn more about container vector policies](vector-search.md#container-vector-policies). ```json { In addition to including or excluding paths for individual properties, you can a "excludedPaths": [ { "path": "/_etag/?"+ }, + { + "path": "/vector/*" } ], "vectorIndexes": [ In addition to including or excluding paths for individual properties, you can a } ``` +>[!IMPORTANT] +> Add the vector path to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. Not adding the vector path to "excludedPaths" results in a higher RU charge and higher latency for vector insertions. ++ You can define the following types of vector index policies: | Type | Description | Max dimensions | |
cosmos-db | How To Python Vector Index Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-python-vector-index-query.md | vector_embedding_policy = {             "distanceFunction": "cosine",             "dimensions": 10         } -        ] +    ] } ``` ++ ## Creating a vector index in the indexing policy Once the vector embedding paths are decided, vector indexes need to be added to the indexing policy. For this example, the indexing policy would look something like this: indexing_policy = {     ],     "excludedPaths": [         { -            "path": "/\"_etag\"/?" +            "path": "/\"_etag\"/?" +        }, +        { +            "path": "/coverImageVector/*" +        }, +        { +            "path": "/contentVector/*"         }     ],     "vectorIndexes": [ indexing_policy = { } ``` +>[!IMPORTANT] +> Add the vector path to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. Not adding the vector path to "excludedPaths" results in a higher RU charge and higher latency for vector insertions. ++ > [!IMPORTANT] > Currently vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy during the time of container creation as it can't be modified later. Both policies will be modifiable in a future improvement to the preview feature. |
cosmos-db | Vector Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md | Title: Vector database description: Vector database functionalities, implementation, and comparison.--++ - build-2024 Previously updated : 03/30/2024 Last updated : 07/23/2024 # Vector database DiskANN enables you to perform highly accurate, low-latency queries at any scal - [Vector indexing in Azure Cosmos DB for NoSQL](index-policy.md#vector-indexes) - [VectorDistance system function NoSQL queries](nosql/query/vectordistance.md) - [How to setup vector database capabilities in Azure Cosmos DB NoSQL](nosql/vector-search.md)-- [Python notebook tutorial](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples)-- [C# Solution accelerator for building AI apps](https://aka.ms/BuildModernAiAppsSolution)-- [C# Azure Cosmos DB Chatbot with Azure OpenAI](https://aka.ms/cosmos-chatgpt-sample)+- [Python - Notebook tutorial](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples) +- [C# - Build Your Own Copilot Complete Solution Accelerator with AKS and Semantic Kernel](https://aka.ms/cdbcopilot) +- [C# - Build Your Own Copilot Sample App and Hands-on-Lab](https://github.com/AzureCosmosDB/cosmosdb-nosql-copilot) +- [Python - Movie Chatbot](https://github.com/AzureCosmosDB/Fabric-Conf-2024-Build-AI-Apps/tree/main/AzureCosmosDBforNoSQL) -### API for MongoDB +### Azure Cosmos DB for MongoDB Use the natively [integrated vector database in Azure Cosmos DB for MongoDB](mongodb/vcore/vector-search.md) (vCore architecture), which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications. 
#### Code samples -- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)+- [Build Your Own Copilot for Azure Cosmos DB for MongoDB in C# with Semantic Kernel](https://github.com/AzureCosmosDB/cosmosdb-mongo-copilot) - [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore) - [C# RAG pattern - Integrate OpenAI Services with Cosmos](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore) - [Python RAG pattern - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)-- [Python notebook tutorial - Vector database integration through LangChain](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db)-- [Python notebook tutorial - LLM Caching integration through LangChain](https://python.langchain.com/docs/integrations/llms/llm_caching#azure-cosmos-db-semantic-cache)+- [Python Notebook - Vector database integration through LangChain tutorial](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db) +- [Python Notebook - LLM Caching integration through LangChain tutorial](https://python.langchain.com/docs/integrations/llms/llm_caching#azure-cosmos-db-semantic-cache) - [Python - LlamaIndex integration](https://docs.llamaindex.ai/en/stable/examples/vector_stores/AzureCosmosDBMongoDBvCoreDemo.html) - [Python - Semantic Kernel memory integration](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/memory/azure_cosmosdb)+- [Python Notebook - Movie Chatbot](https://github.com/AzureCosmosDB/Fabric-Conf-2024-Build-AI-Apps/tree/main/AzureCosmosDBforMongoDB) > [!div class="nextstepaction"] > [Use Azure Cosmos DB for MongoDB lifetime free tier](mongodb/vcore/free-tier.md) |
cost-management-billing | Programmatically Create Subscription Enterprise Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md | resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01 * Now that you created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md). * For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).-* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions). +* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions). * For advanced subscription creation scenarios using REST API, see [Alias - Create](/rest/api/subscription/2021-10-01/alias/create). |
cost-management-billing | Programmatically Create Subscription Microsoft Customer Agreement Across Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement-across-tenants.md | Content-Type: application/json * Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md). * For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).-* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions). +* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions). |
cost-management-billing | Programmatically Create Subscription Microsoft Customer Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md | resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01 * Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md). * For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).-* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions). +* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions). * For advanced subscription creation scenarios using REST API, see [Alias - Create](/rest/api/subscription/2021-10-01/alias/create). |
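Across these three quickstarts, programmatic subscription creation ultimately goes through the Alias - Create REST API linked above. A hedged C# sketch of that call follows; the alias name, display name, billing scope, and token acquisition are placeholders to adapt to your agreement type:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class SubscriptionAliasDemo
{
    // accessToken is an ARM bearer token for an identity with permission to
    // create subscriptions under the target billing scope.
    public static async Task CreateAliasAsync(string accessToken)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        string url = "https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01";

        // billingScope points at the enrollment account (EA) or invoice section (MCA) to bill.
        string body = @"{
            ""properties"": {
                ""displayName"": ""Contoso subscription"",
                ""billingScope"": ""<BILLING_SCOPE>"",
                ""workload"": ""Production""
            }
        }";

        HttpResponseMessage response = await http.PutAsync(
            url, new StringContent(body, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}
```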
data-factory | Self Hosted Integration Runtime Proxy Ssis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md | The cloud staging tasks that run on your Azure-SSIS IR are not be billed separat ## Enforce TLS 1.2 -If you need to access data stores that have been configured to use only the strongest cryptography/most secure network protocol (TLS 1.2), including your Azure Blob Storage for staging, you must enable only TLS 1.2 and disable older SSL/TLS versions at the same time on your self-hosted IR. To do so, you can download and run the *main.cmd* script that we provide in the *CustomSetupScript/UserScenarios/TLS 1.2* folder of our public preview blob container. Using [Azure Storage Explorer](https://storageexplorer.com/), you can connect to our public preview blob container by entering the following SAS URI: --`https://ssisazurefileshare.blob.core.windows.net/publicpreview?sp=rl&st=2020-03-25T04:00:00Z&se=2025-03-25T04:00:00Z&sv=2019-02-02&sr=c&sig=WAD3DATezJjhBCO3ezrQ7TUZ8syEUxZZtGIhhP6Pt4I%3D` +If you need to access data stores that have been configured to use only the strongest cryptography/most secure network protocol (TLS 1.2), including your Azure Blob Storage for staging, you must enable only TLS 1.2 and disable older SSL/TLS versions at the same time on your self-hosted IR. To do so, you can download and run the *main.cmd* script from https://github.com/Azure/Azure-DataFactory/tree/main/SamplesV2/SQLServerIntegrationServices/publicpreview/CustomSetupScript/UserScenarios/TLS%201.2. ## Current limitations |
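For illustration only, enforcing TLS 1.2 on the self-hosted IR machine comes down to toggling the well-known SCHANNEL registry keys; the following C# sketch shows the general technique, not the contents of the linked *main.cmd* script, which may configure additional protocols and .NET strong-crypto settings:

```csharp
using Microsoft.Win32;

public static class TlsHardeningSketch
{
    private const string Protocols =
        @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols";

    // Enables TLS 1.2 and disables TLS 1.0 for client connections. Run elevated
    // on the machine; a reboot is typically needed for the change to take effect.
    public static void EnforceTls12()
    {
        Registry.SetValue($@"{Protocols}\TLS 1.2\Client", "Enabled", 1, RegistryValueKind.DWord);
        Registry.SetValue($@"{Protocols}\TLS 1.2\Client", "DisabledByDefault", 0, RegistryValueKind.DWord);
        Registry.SetValue($@"{Protocols}\TLS 1.0\Client", "Enabled", 0, RegistryValueKind.DWord);
        Registry.SetValue($@"{Protocols}\TLS 1.0\Client", "DisabledByDefault", 1, RegistryValueKind.DWord);
    }
}
```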
defender-for-cloud | Devops Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-support.md | The following tables summarize the availability and prerequisites for each featu | [Security recommendations to fix code vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud)| ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security for Azure DevOps](/azure/devops/repos/security/configure-github-advanced-security-features?view=azure-devops&tabs=yaml&preserve-view=true) for CodeQL findings, [Microsoft Security DevOps extension](azure-devops-extension.yml) | | [Security recommendations to discover exposed secrets](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security for Azure DevOps](/azure/devops/repos/security/configure-github-advanced-security-features?view=azure-devops&tabs=yaml&preserve-view=true) | | [Security recommendations to fix open source vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security for Azure DevOps](/azure/devops/repos/security/configure-github-advanced-security-features?view=azure-devops&tabs=yaml&preserve-view=true) |-| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md#configure-iac-scanning-and-view-the-results-in-azure-devops) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [Microsoft Security DevOps extension](azure-devops-extension.yml) | +| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [Microsoft Security DevOps extension](azure-devops-extension.yml) | | [Security recommendations to fix DevOps environment misconfigurations](concept-devops-posture-management-overview.md) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | N/A | | [Pull request annotations](review-pull-request-annotations.md) | | ![Yes Icon](./medi) | | [Code to cloud mapping for Containers](container-image-mapping.md) | | ![Yes Icon](./media/icons/yes-icon.png) | [Microsoft Security DevOps extension](azure-devops-extension.yml#configure-the-microsoft-security-devops-azure-devops-extension) | The following tables summarize the availability and prerequisites for each featu | Feature | Foundational CSPM | Defender CSPM | Prerequisites | |-|:--:|:--:|| | [Connect GitHub repositories](quickstart-onboard-github.md) | ![Yes Icon](./medi#prerequisites) |-| [Security recommendations to fix code vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud)| ![Yes Icon](./medi) | +| [Security recommendations to fix code vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud)| ![Yes Icon](./medi) | | [Security recommendations to discover exposed secrets](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced 
Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) | | [Security recommendations to fix open source vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) |-| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md#configure-iac-scanning-and-view-the-results-in-azure-devops) | ![Yes Icon](./medi) | +| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md) | ![Yes Icon](./medi) | | [Security recommendations to fix DevOps environment misconfigurations](concept-devops-posture-management-overview.md) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | N/A | | [Code to cloud mapping for Containers](container-image-mapping.md) | | ![Yes Icon](./medi) |-| [Code to cloud mapping for Infrastructure as Code templates](iac-template-mapping.md) | | ![Yes Icon](./medi) | | [Attack path analysis](how-to-manage-attack-path.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector | |
defender-for-cloud | Gain End User Context Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/gain-end-user-context-ai.md | If a field's name is misspelled, the Azure OpenAI API call will still succeed. The provided schema consists of the `SecurityContext` object, which contains several parameters that describe the application itself and the end user that interacts with the application. These fields help your security operations teams investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. -- End used ID+- End user ID - End user type - End user tenant's ID - Source IP address. |
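A hedged C# sketch of composing that end-user security context follows; the property names mirror the fields listed above, but treat the exact JSON names and casing as assumptions to verify against the published schema, especially since, as noted, a misspelled field doesn't fail the call:

```csharp
using System.Text.Json;

public static class SecurityContextDemo
{
    // Field names below are illustrative mappings of the article's list
    // (end user ID, end user type, end user tenant ID, source IP address).
    public static string BuildUserSecurityContext() =>
        JsonSerializer.Serialize(new
        {
            end_user_id = "user@contoso.com",
            end_user_type = "EntraID",
            end_user_tenant_id = "00000000-0000-0000-0000-000000000000",
            source_ip = "203.0.113.42"
        });
}
```

Validating the serialized payload against the schema before sending it is worthwhile here, precisely because the API accepts misspelled fields silently.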
defender-for-cloud | Github Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md | Microsoft Security DevOps uses the following Open Source tools: - [Connect your GitHub repositories](quickstart-onboard-github.md). -- Follow the guidance to set up [GitHub Advanced Security](https://docs.github.com/en/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization/managing-security-and-analysis-settings-for-your-organization) to view the DevOps posture assessments in Defender for Cloud.- - Open the [Microsoft Security DevOps GitHub action](https://github.com/marketplace/actions/security-devops-action) in a new window. - Ensure that [Workflow permissions are set to Read and Write](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository) on the GitHub repository. This includes setting "id-token: write" permissions in the GitHub Workflow for federation with Defender for Cloud. Microsoft Security DevOps uses the following Open Source tools: on: push: branches:- - master + - main jobs: sample: name: Microsoft Security DevOps - # MSDO runs on windows-latest. - # ubuntu-latest also supported + # Windows and Linux agents are supported runs-on: windows-latest permissions: contents: read id-token: write actions: read+ # Write access for security-events is only required for customers looking for MSDO results to appear in the codeQL security alerts tab on GitHub (Requires GHAS) security-events: write steps: Microsoft Security DevOps uses the following Open Source tools: - uses: actions/checkout@v3 # Run analyzers- - name: Run Microsoft Security DevOps Analysis + - name: Run Microsoft Security DevOps uses: microsoft/security-devops-action@latest id: msdo # with: Microsoft Security DevOps uses the following Open Source tools: # languages: string. Optional. A comma-separated list of languages to analyze. Example: 'javascript,typescript'. Defaults to all. # tools: string. Optional. A comma-separated list of analyzer tools to run. Values: 'bandit', 'binskim', 'checkov', 'eslint', 'templateanalyzer', 'terrascan', 'trivy'. 
- # Upload alerts to the Security tab - - name: Upload alerts to Security tab - uses: github/codeql-action/upload-sarif@v2 - with: - sarif_file: ${{ steps.msdo.outputs.sarifFile }} -- # Upload alerts file as a workflow artifact - - name: Upload alerts file as a workflow artifact - uses: actions/upload-artifact@v3 - with: - name: alerts - path: ${{ steps.msdo.outputs.sarifFile }} + # Upload alerts to the Security tab - required for MSDO results to appear in the CodeQL security alerts tab on GitHub (Requires GHAS) + # - name: Upload alerts to Security tab + # uses: github/codeql-action/upload-sarif@v3 + # with: + # sarif_file: ${{ steps.msdo.outputs.sarifFile }} ++ # Upload alerts file as a workflow artifact - required for MSDO results to appear in the CodeQL security alerts tab on GitHub (Requires GHAS) + # - name: Upload alerts file as a workflow artifact + # uses: actions/upload-artifact@v3 + # with: + # name: alerts + # path: ${{ steps.msdo.outputs.sarifFile }} ```-- > [!NOTE] - > **For additional tool configuration options and instructions, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)** + > [!NOTE] + > **For additional tool configuration options and instructions, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)** 1. Select **Start commit** - :::image type="content" source="media/msdo-github-action/start-commit.png" alt-text="Screenshot showing you where to select start commit."::: --1. Select **Commit new file**. + :::image type="content" source="media/msdo-github-action/start-commit.png" alt-text="Screenshot showing you where to select start commit."::: + +1. Select **Commit new file**. Note that the process can take up to one minute to complete. - :::image type="content" source="media/msdo-github-action/commit-new.png" alt-text="Screenshot showing you how to commit a new file."::: -- The process can take up to one minute to complete. + :::image type="content" source="media/msdo-github-action/commit-new.png" alt-text="Screenshot showing you how to commit a new file."::: 1. Select **Actions** and verify the new action is running. - :::image type="content" source="media/msdo-github-action/verify-actions.png" alt-text="Screenshot showing you where to navigate to, to see that your new action is running." lightbox="media/msdo-github-action/verify-actions.png"::: + :::image type="content" source="media/msdo-github-action/verify-actions.png" alt-text="Screenshot showing you where to navigate to, to see that your new action is running." lightbox="media/msdo-github-action/verify-actions.png"::: ## View scan results **To view your scan results**: -1. Sign in to [GitHub](https://www.github.com). --1. Navigate to **Security** > **Code scanning alerts** > **Tool**. +1. Sign in to Azure. -1. From the dropdown menu, select **Filter by tool**. +1. Navigate to Defender for Cloud > DevOps Security. -Code scanning findings will be filtered by specific MSDO tools in GitHub. These code scanning results are also pulled into Defender for Cloud recommendations. +1. From the DevOps security blade, you should begin seeing the same MSDO security results developers see in their CI logs within minutes for the associated repository. Customers with GitHub Advanced Security will see the findings ingested from these tools as well. ## Learn more Code scanning findings will be filtered by specific MSDO tools in GitHub. These - Learn how to [deploy apps from GitHub to Azure](/azure/developer/github/deploy-to-azure). 
-## Related content +## Next steps Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md). |
defender-for-cloud | Iac Vulnerabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md | This article shows you how to apply a template YAML configuration file to scan y - If you manage your source code in Azure DevOps, set up the [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.yml). - Ensure that you have an IaC template in your repository. -<a name="configure-iac-scanning-and-view-the-results-in-github"></a> - ## Set up and run a GitHub action to scan your connected IaC source code To set up an action and view scan results in GitHub: To set up an action and view scan results in GitHub: 1. Select the workflow to see the action status. -1. To view the results of the scan, go to **Security** > **Code scanning alerts**. -- You can filter by tool to see only the IaC findings. --<a name="configure-iac-scanning-and-view-the-results-in-azure-devops"></a> +1. To view the results of the scan, go to **Defender for Cloud** > **DevOps security** (no GHAS prerequisite) or **Security** > **Code scanning alerts** natively in GitHub (Requires GHAS license). ## Set up and run an Azure DevOps extension to scan your connected IaC source code To set up an extension and view scan results in Azure DevOps: ## View details and remediation information for applied IaC rules -The IaC scanning tools that are included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) ([PSRule](https://aka.ms/ps-rule-azure) is included in Template Analyzer) and [Terrascan](https://github.com/tenable/terrascan). +The IaC scanning tools that are included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) ([PSRule](https://aka.ms/ps-rule-azure) is included in Template Analyzer), [Checkov](https://www.checkov.io/), and [Terrascan](https://github.com/tenable/terrascan). Template Analyzer runs rules on Azure Resource Manager templates (ARM templates) and Bicep templates. For more information, see the [Template Analyzer rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules). Terrascan runs rules on ARM templates and templates for CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform. For more information, see the [Terrascan rules](https://runterrascan.io/docs/policies/). +Checkov runs rules on ARM templates and templates for CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform. For more information, see the [Checkov rules](https://www.checkov.io/5.Policy%20Index/all.html). + To learn more about the IaC scanning tools that are included with Microsoft Security DevOps, see: - [Template Analyzer](https://github.com/Azure/template-analyzer)-- [PSRule](https://aka.ms/ps-rule-azure)+- [Checkov](https://www.checkov.io/) - [Terrascan](https://runterrascan.io/) ## Related content |
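To illustrate the kind of finding these IaC scanners produce, here's a deliberately misconfigured ARM resource sketch. The storage account name is hypothetical, and the two flagged properties are illustrative examples rather than an exhaustive rule list.

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2023-01-01",
  "name": "contosoinsecurestorage",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2",
  "properties": {
    "supportsHttpsTrafficOnly": false,
    "minimumTlsVersion": "TLS1_0"
  }
}
```

A scan of a template like this would typically surface recommendations to enforce HTTPS-only traffic and raise the minimum TLS version, with remediation details linked from the rule references above.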
defender-for-cloud | Quickstart Onboard Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md | To complete this quick start, you need: - An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- GitHub Enterprise with GitHub Advanced Security enabled for posture assessments of secrets, dependencies, Infrastructure-as-Code misconfigurations, and code quality analysis within GitHub repositories.- ## Availability | Aspect | Details | To complete this quick start, you need: > [!NOTE] > **Security Reader** role can be applied on the Resource Group/GitHub connector scope to avoid setting highly privileged permissions on a Subscription level for read access of DevOps security posture assessments. -## Connect your GitHub account +## Connect your GitHub environment -To connect your GitHub account to Microsoft Defender for Cloud: +To connect your GitHub environment to Microsoft Defender for Cloud: 1. Sign in to the [Azure portal](https://portal.azure.com/). To connect your GitHub account to Microsoft Defender for Cloud: 1. Select **Install**. -1. Select the organizations to install the GitHub application. It's recommended to grant access to **all repositories** to ensure Defender for Cloud can secure your entire GitHub environment. -- This step grants Defender for Cloud access to the selected organizations. --1. For Organizations, select one of the following: -- - Select **all existing organizations** to autodiscover all repositories in GitHub organizations where the DevOps security GitHub application is installed. - - Select **all existing and future organizations** to autodiscover all repositories in GitHub organizations where the DevOps security GitHub application is installed and future organizations where the DevOps security GitHub application is installed. +1. Select the organizations to install the Defender for Cloud GitHub application. It's recommended to grant access to **all repositories** to ensure Defender for Cloud can secure your entire GitHub environment. + This step grants Defender for Cloud access to organizations that you wish to onboard. + +1. All organizations with the Defender for Cloud GitHub application installed will be onboarded to Defender for Cloud. To change the behavior going forward, select one of the following: ++ - Select **all existing organizations** to automatically discover all repositories in GitHub organizations where the DevOps security GitHub application is installed. + + - Select **all existing and future organizations** to automatically discover all repositories in GitHub organizations where the DevOps security GitHub application is installed and future organizations where the DevOps security GitHub application is installed. + > [!NOTE] + > Organizations can be removed from your connector after the connector creation is complete. See the [editing your DevOps connector](edit-devops-connector.md) page for more information. + 1. Select **Next: Review and generate**. 1. Select **Create**. |
defender-for-cloud | Recommendations Reference Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-devops.md | DevOps recommendations don't affect your [secure score](secure-score-security-co **Severity**: High +### [(Preview) Azure DevOps projects should have creation of classic pipelines disabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/9f4a17ee-7a02-4978-b968-8c36b74ac8e3) ++**Description**: Disabling the creation of classic build and release pipelines prevents a security concern that stems from YAML and classic pipelines sharing the same resources, for example the same service connections. Potential attackers can leverage classic pipelines to create processes that evade typical defense mechanisms set up around modern YAML pipelines. ++**Severity**: High + ## GitHub recommendations ### [GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3) DevOps recommendations don't affect your [secure score](secure-score-security-co **Severity**: High +### [(Preview) GitHub organizations should block Copilot suggestions that match public code](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/98e858ed-6e88-4698-b538-f51b31ad57f6) ++**Description**: Enabling GitHub Copilot's filter to block code suggestions matching public code on GitHub enhances security and legal compliance. It prevents the unintentional incorporation of public or open-source code, reducing the risk of legal issues and ensuring adherence to licensing terms. Additionally, it helps avoid introducing potential vulnerabilities from public code into the organization's projects, thereby maintaining higher code quality and security. When the filter is enabled, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match or near match, the suggestion will not be shown. ++**Severity**: High ++### [(Preview) GitHub organizations should enforce multifactor authentication for outside collaborators](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/a9621d26-9d8c-4cd6-8ad0-84501eb88f17) ++**Description**: Enforcing multifactor authentication for outside collaborators in a GitHub organization is a security measure that requires collaborators to use an additional form of identification besides their password to access the organization's repositories and resources. This enhances security by protecting against unauthorized access, even if a password is compromised, and helps ensure compliance with industry standards. It involves informing collaborators about the requirement and providing support for the transition, ultimately reducing the risk of data breaches. ++**Severity**: High ++### [(Preview) GitHub repositories should require minimum two-reviewer approval for code pushes](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/20be7df7-9ebb-4fb4-95a9-3ae19b78b80a) ++**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in GitHub repositories. 
We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. ++**Severity**: High + ### GitLab recommendations ### [GitLab projects should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/867001c3-2d01-4db7-b513-5cb97638f23d/showSecurityCenterCommandBar~/false) DevOps recommendations don't affect your [secure score](secure-score-security-co ## Related content - [Learn about security recommendations](security-policy-concept.md)-- [Review security recommendations](review-security-recommendations.md)+- [Review security recommendations](review-security-recommendations.md) |
dev-box | Concept Dev Box Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-deployment-guide.md | When you have the following requirements, you need to use Azure network connecti When connecting to resources on-premises through Microsoft Entra hybrid joins, work with your Azure network topology expert. Best practice is to implement a [hub-and-spoke network topology](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology). The hub is the central point that connects to your on-premises network; you can use ExpressRoute, a site-to-site VPN, or a point-to-site VPN. The spoke is the virtual network that contains the dev boxes. You peer the dev box virtual network to the on-premises connected virtual network to provide access to on-premises resources. Hub and spoke topology can help you manage network traffic and security. +Network planning should include an estimate of the number of IP addresses you'll need, and their distribution across VNets. Additional free IP addresses are necessary for the Azure network connection health check: you need one additional IP address per dev box, plus two IP addresses for the health check and Dev Box infrastructure. For example, a subnet that supports 50 dev boxes needs at least 52 free IP addresses. + Learn more about [Microsoft Dev Box networking requirements](./concept-dev-box-network-requirements.md?tabs=W365). ### Step 3: Configure security groups for role-based access control |
dev-box | Concept Dev Box Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-network-requirements.md | These FQDNs and endpoints only correspond to client sites and resources. This li ## Troubleshooting +This section covers some common connection and network issues. + ### Connection issues - **Logon attempt failed** These FQDNs and endpoints only correspond to client sites and resources. This li For more information about troubleshooting group policy issues, see [Applying Group Policy troubleshooting guidance](/troubleshoot/windows-server/group-policy/applying-group-policy-troubleshooting-guidance). - ### IPv6 addressing issues If you're experiencing IPv6 issues, check that the *Microsoft.AzureActiveDirectory* service endpoint is not enabled on the virtual network or subnet. This service endpoint converts the IPv4 to IPv6. For more information, see [Virtual Network service endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview). +### Updating dev box definition image issues ++When you update the image used in a dev box definition, you must ensure that you have sufficient IP addresses available in your virtual network. Additional free IP addresses are necessary for the Azure network connection health check; if the health check fails, the dev box definition doesn't update. You need one additional IP address per dev box, plus two IP addresses for the health check and Dev Box infrastructure. ++For more information about updating dev box definition images, see [Update a dev box definition](how-to-manage-dev-box-definitions.md#update-a-dev-box-definition). ## Related content |
dev-box | How To Manage Dev Box Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md | The following steps show you how to create a dev box definition by using an exis Over time, your needs for dev boxes can change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so new dev boxes use the new configuration. +When you update the image used in a dev box definition, you must ensure that you have sufficient IP addresses available in your virtual network. Additional free IP addresses are necessary for the Azure network connection health check; if the health check fails, the dev box definition doesn't update. You need one additional IP address per dev box, plus two IP addresses for the health check and Dev Box infrastructure. + You can update the image, image version, compute, and storage settings for a dev box definition: 1. Sign in to the [Azure portal](https://portal.azure.com). |
event-hubs | Monitor Event Hubs Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs-reference.md | Title: Monitoring Azure Event Hubs data reference -description: Important reference material needed when you monitor Azure Event Hubs. + Title: Monitoring data reference for Azure Event Hubs +description: This article contains important reference material you need when you monitor Azure Event Hubs by using Azure Monitor. Last updated : 06/20/2024+ - Previously updated : 10/06/2022+ +# Azure Event Hubs monitoring data reference -# Monitoring Azure Event Hubs data reference -See [Monitoring Azure Event Hubs](monitor-event-hubs.md) for details on collecting and analyzing monitoring data for Azure Event Hubs. ++See [Monitor Azure Event Hubs](monitor-event-hubs.md) for details on the data you can collect for Event Hubs and how to use it. ++Azure Event Hubs creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises. ++Azure Event Hubs collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). +++### Supported metrics for Microsoft.EventHub/clusters ++The following table lists the metrics available for the Microsoft.EventHub/clusters resource type. +++### Supported metrics for Microsoft.EventHub/Namespaces ++The following table lists the metrics available for the Microsoft.EventHub/Namespaces resource type. +++The following tables list all the automatically collected platform metrics collected for Azure Event Hubs. The resource provider for these metrics is `Microsoft.EventHub/clusters` or `Microsoft.EventHub/namespaces`. ++*Request metrics* count the number of data and management operations requests. This table provides more information about values from the preceding tables. ++| Metric name | Description | +|:--|:| +| Incoming Requests | The number of requests made to the Event Hubs service over a specified period. This metric includes all the data and management plane operations. | +| Successful Requests | The number of successful requests made to the Event Hubs service over a specified period. | +| Throttled Requests | The number of requests that were throttled because the usage was exceeded. | ++This table provides more information for message metrics from the preceding tables. ++| Metric name | Description | +|:|:| +| Incoming Messages | The number of events or messages sent to Event Hubs over a specified period. | +| Outgoing Messages | The number of events or messages received from Event Hubs over a specified period. | +| Captured Messages | The number of captured messages. | +| Incoming Bytes | Incoming bytes for an event hub over a specified period. | +| Outgoing Bytes | Outgoing bytes for an event hub over a specified period. | +| Size | Size of an event hub in bytes. | > [!NOTE]-> Azure Monitor doesn't include dimensions in the exported metrics data, that's sent to a destination like Azure Storage, Azure Event Hubs, Log Analytics, etc. +> - These values are point-in-time values. Incoming messages that are consumed immediately after that point-in-time might not be reflected in these metrics. +> - The Incoming Requests metric includes all the data and management plane operations. 
The Incoming Messages metric gives you the total number of events that are sent to the event hub. For example, if you send a batch of 100 events to an event hub, it counts as 1 incoming request and 100 incoming messages. ++This table provides more information for capture metrics from the preceding tables. +| Metric name | Description | +|:|:| +| Captured Messages | The number of captured messages. | +| Captured Bytes | Captured bytes for an event hub. | +| Capture Backlog | Capture backlog for an event hub. | -## Metrics -This section lists all the automatically collected platform metrics collected for Azure Event Hubs. The resource provider for these metrics is `Microsoft.EventHub/clusters` or `Microsoft.EventHub/namespaces`. +This table provides more information for connection metrics from the preceding tables. -### Request metrics -Counts the number of data and management operations requests. +| Metric name | Description | +|:|:| +| Active Connections | The number of active connections on a namespace and on an entity (event hub) in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time might not be reflected in the metric. | +| Connections Opened | The number of open connections. | +| Connections Closed | The number of closed connections. | -| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | -| - | - | -- | | | | -| Incoming Requests| Yes | Count | Count | The number of requests made to the Event Hubs service over a specified period. This metric includes all the data and management plane operations. | Entity name| -| Successful Requests| No | Count | Count | The number of successful requests made to the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result | -| Throttled Requests| No | Count | Count | The number of requests that were throttled because the usage was exceeded. | Entity name<br/><br/>Operation Result | +This table provides more information for error metrics from the preceding tables. -The following two types of errors are classified as **user errors**: +| Metric name | Description | +|:|:| +| Server Errors | The number of requests not processed because of an error in the Event Hubs service over a specified period. | +| User Errors | The number of requests not processed because of user errors over a specified period. | +| Quota Exceeded Errors | The number of errors caused by exceeding quotas over a specified period. | ++The following two types of errors are classified as *user errors*: 1. Client-side errors (In HTTP that would be 400 errors). 2. Errors that occur while processing messages. +> [!NOTE] +> Logic Apps creates epoch receivers. Receivers can be moved from one node to another depending on the service load. During those moves, `ReceiverDisconnection` exceptions might occur. They are counted as user errors on the Event Hubs service side. Logic Apps can collect failures from Event Hubs clients so that you can view them in user logs. -### Message metrics -| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | -| - | - | -- | | | | -|Incoming Messages| Yes | Count | Count | The number of events or messages sent to Event Hubs over a specified period. | Entity name| -|Outgoing Messages| Yes | Count | Count | The number of events or messages received from Event Hubs over a specified period. 
| Entity name | -| Captured Messages| No | Count| Count | The number of captured messages. | Entity name | -|Incoming Bytes | Yes | Bytes | Count | Incoming bytes for an event hub over a specified period. | Entity name| -|Outgoing Bytes | Yes | Bytes | Count | Outgoing bytes for an event hub over a specified period. | Entity name | -| Size | No | Bytes | Average | Size of an event hub in bytes.|Entity name | -> [!NOTE] -> - These values are point-in-time values. Incoming messages that were consumed immediately after that point-in-time may not be reflected in these metrics. -> - The **Incoming requests** metric includes all the data and management plane operations. The **Incoming messages** metric gives you the total number of events that are sent to the event hub. For example, if you send a batch of 100 events to an event hub, it'll count as 1 incoming request and 100 incoming messages. --### Capture metrics -| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | -| - | -- | | | | | -| Captured Messages| No | Count| Count | The number of captured messages. | Entity name | -| Captured Bytes | No | Bytes | Count | Captured bytes for an event hub | Entity name | -| Capture Backlog | No | Count| Count | Capture backlog for an event hub | Entity name | ---### Connection metrics -| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | -| - | -- | | | | | -|Active Connections| No | Count | Average | The number of active connections on a namespace and on an entity (event hub) in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric.| Entity name | -|Connections Opened | No | Count | Average | The number of open connections. | Entity name | -|Connections Closed | No | Count | Average| The number of closed connections. | Entity name | --### Error metrics -| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | -| - | -- | | | | | -|Server Errors| No | Count | Count | The number of requests not processed because of an error in the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result | -|User Errors | No | Count | Count | The number of requests not processed because of user errors over a specified period. | Entity name<br/><br/>Operation Result| -|Quota Exceeded Errors | No |Count | Count | The number of errors caused by exceeding quotas over a specified period. | Entity name<br/><br/>Operation Result| +| Dimension name | Description | +|:-|:| +| EntityName | Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension has a value of `-NamespaceOnlyMetric-` in addition to all your event hubs. It represents the requests that were made at the namespace level. Examples include a request to list all event hubs in the namespace or requests to entities that failed authentication or authorization. | +| OperationResult | Either indicates `success` or the appropriate error state, such as `serverbusy`, `clienterror` or `quotaexceeded`. | ++Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level. > [!NOTE]-> Logic Apps creates epoch receivers and receivers may be moved from one node to another depending on the service load. During those moves, `ReceiverDisconnection` exceptions may occur. 
They are counted as user errors on the Event Hubs service side. Logic Apps may collect failures from Event Hubs clients so that you can view them in user logs. +> When you enable metrics in a diagnostic setting, dimension information isn't currently included as part of the information sent to a storage account, event hub, or log analytics. + -## Metric dimensions +### Supported resource logs for Microsoft.EventHub/Namespaces -Azure Event Hubs supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level. -|Dimension name|Description| -| - | -- | -|Entity Name| Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension has a value of '-NamespaceOnlyMetric-' in addition to all your event hubs. It represents the requests that were made at the namespace level. Examples include a request to list all event hubs in the namespace or requests to entities that failed authentication or authorization.| +### Event Hubs Microsoft.EventHub/namespaces -## Resource logs +- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns) +- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns) +- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns) +- [AZMSApplicationMetricLogs](/azure/azure-monitor/reference/tables/azmsapplicationmetriclogs#columns) +- [AZMSOperationalLogs](/azure/azure-monitor/reference/tables/azmsoperationallogs#columns) +- [AZMSRunTimeAuditLogs](/azure/azure-monitor/reference/tables/azmsruntimeauditlogs#columns) +- [AZMSDiagnosticErrorLogs](/azure/azure-monitor/reference/tables/azmsdiagnosticerrorlogs#columns) +- [AZMSVnetConnectionEvents](/azure/azure-monitor/reference/tables/azmsvnetconnectionevents#columns) +- [AZMSArchiveLogs](/azure/azure-monitor/reference/tables/azmsarchivelogs#columns) +- [AZMSAutoscaleLogs](/azure/azure-monitor/reference/tables/azmsautoscalelogs#columns) +- [AZMSKafkaCoordinatorLogs](/azure/azure-monitor/reference/tables/azmskafkacoordinatorlogs#columns) +- [AZMSKafkaUserErrorLogs](/azure/azure-monitor/reference/tables/azmskafkausererrorlogs#columns) +- [AZMSCustomerManagedKeyUserLogs](/azure/azure-monitor/reference/tables/azmscustomermanagedkeyuserlogs#columns) -Azure Event Hubs now has the capability to dispatch logs to either of two destination tables - Azure Diagnostic or [Resource specific tables](~/articles/azure-monitor/essentials/resource-logs.md) in Log Analytics. You could use the toggle available on Azure portal to choose destination tables. +### Event Hubs resource logs ++Azure Event Hubs now has the capability to dispatch logs to either of two destination tables: Azure Diagnostic or [Resource specific tables](~/articles/azure-monitor/essentials/resource-logs.md) in Log Analytics. You could use the toggle available on Azure portal to choose destination tables. :::image type="content" source="media/monitor-event-hubs-reference/destination-table-toggle.png" alt-text="Screenshot of dialog box to set destination table." lightbox="media/monitor-event-hubs-reference/destination-table-toggle.png"::: +Azure Event Hubs uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#event-hubs). ++You can view our sample queries to get started with different log categories. 
++> [!IMPORTANT] +> Dimensions aren't exported to a Log Analytics workspace. + [!INCLUDE [event-hubs-diagnostic-log-schema](./includes/event-hubs-diagnostic-log-schema.md)] +### Runtime audit logs -## Runtime audit logs -Runtime audit logs capture aggregated diagnostic information for all data plane access operations (such as send or receive events) in Event Hubs. +Runtime audit logs capture aggregated diagnostic information for all data plane access operations (such as send or receive events) in Event Hubs. -> [!NOTE] +> [!NOTE] > Runtime audit logs are available only in **premium** and **dedicated** tiers. Runtime audit logs include the elements listed in the following table: - | Name | Description | Supported in Azure Diagnostics | Supported in Resource Specific table |-| - | -| --| --| +|:- |:-|:--|:--| | `ActivityId` | A randomly generated UUID that ensures uniqueness for the audit activity. | Yes | Yes | | `ActivityName` | Runtime operation name.| Yes | Yes | | `ResourceId` | Resource associated with the activity. | Yes | Yes | | `Timestamp` | Aggregation time. | Yes | No |-| `TimeGenerated [UTC]`|Time of executed operation (in UTC)| No | Yes | +| `TimeGenerated [UTC]`|Time of executed operation (in UTC) | No | Yes | | `Status` | Status of the activity (success or failure). | Yes | Yes | | `Protocol` | Type of the protocol associated with the operation. | Yes | Yes |-| `AuthType` | Type of authentication (Azure Active Directory or SAS Policy). | Yes | Yes | -| `AuthKey` | Azure Active Directory application ID or SAS policy name that's used to authenticate to a resource. | Yes | Yes | +| `AuthType` | Type of authentication (Microsoft Entra ID or SAS Policy). | Yes | Yes | +| `AuthKey` | Microsoft Entra ID application ID or SAS policy name that's used to authenticate to a resource. | Yes | Yes | | `NetworkType` | Type of the network access: `Public` or `Private`. | Yes | Yes | | `ClientIP` | IP address of the client application. | Yes | Yes | | `Count` | Total number of operations performed during the aggregated period of 1 minute. | Yes | Yes | | `Properties` | Metadata that are specific to the data plane operation. | Yes | Yes |-| `Category` | Log category | Yes | NO | -| `Provider`|Name of Service emitting the logs, such as Eventhub | No | Yes | +| `Category` | Log category | Yes | No | +| `Provider`|Name of Service emitting the logs, such as EventHubs | No | Yes | | `Type` | Type of logs emitted | No | Yes | Here's an example of a runtime audit log entry: -AzureDiagnostics : +AzureDiagnostics: + ```json { "ActivityId": "<activity id>", AzureDiagnostics : } ```+ Resource specific table entry:+ ```json { "ActivityId": "<activity id>", Resource specific table entry: ``` -## Application metrics logs -Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics. +### Application metrics logs -> [!NOTE] -> Application metrics logs are available only in **premium** and **dedicated** tiers. +Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics. ++> [!NOTE] +> Application metrics logs are available only in **premium** and **dedicated** tiers. | Name | Description |-| - | - | +|:-|:- | | `ConsumerLag` | Indicate the lag between consumers and producers. 
| | `NamespaceActiveConnections` | Details of active connections established from a client to the event hub. | | `GetRuntimeInfo` | Obtain run time information from Event Hubs. | Application metrics logs capture the aggregated information on certain metrics r | `OffsetCommit` | Number of offset commit calls made to the event hub | | `OffsetFetch` | Number of offset fetch calls made to the event hub. | -## Diagnostic Error Logs -Diagnostic error logs capture error messages for any client side, throttling and Quota exceeded errors. They provide detailed diagnostics for error identification. +### Diagnostic Error Logs ++Diagnostic error logs capture error messages for any client side, throttling, and Quota exceeded errors. They provide detailed diagnostics for error identification. -Diagnostic Error Logs include elements listed in below table: +Diagnostic Error Logs include elements listed in following table: | Name | Description | Supported in Azure Diagnostics | Supported in AZMSDiagnosticErrorLogs (Resource specific table) |-| ||| | +|:|:|:|:| | `ActivityId` | A randomly generated UUID that ensures uniqueness for the audit activity. | Yes | Yes | | `ActivityName` | Operation name | Yes | Yes | | `NamespaceName` | Name of Namespace | Yes | yes | Here's an example of Diagnostic error log entry: } ```+ Resource specific table entry:+ ```json { "ActivityId": "0000000000-0000-0000-0000-00000000000000", Resource specific table entry: ``` -## Azure Monitor Logs tables -Azure Event Hubs uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#event-hubs). -You can view our sample queries to get started with different log categories. --> [!IMPORTANT] -> Dimensions aren't exported to a Log Analytics workspace. +- [Microsoft.EventHub resource provider operations](/azure/role-based-access-control/permissions/integration#microsofteventhub) +## Related content -## Next steps -- For details on monitoring Azure Event Hubs, see [Monitoring Azure Event Hubs](monitor-event-hubs.md).-- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).+- See [Monitor Azure Event Hubs](monitor-event-hubs.md) for a description of monitoring Event Hubs. +- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
event-hubs | Monitor Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md | Title: Monitoring Azure Event Hubs + Title: Monitor Azure Event Hubs description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Event Hubs. Last updated : 06/20/2024+ - Previously updated : 04/05/2024+ # Monitor Azure Event Hubs-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Event Hubs and how to analyze and alert on this data with Azure Monitor. -## What is Azure Monitor? -Azure Event Hubs creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises. -Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts: +Azure Monitor documentation describes the following concepts: - What is Azure Monitor? - Costs associated with monitoring Start with the article [Monitoring Azure resources with Azure Monitor](../azure- - Configuring data collection - Standard tools in Azure for analyzing and alerting on monitoring data -The following sections build on this article by describing the specific data gathered for Azure Event Hubs. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. +The following sections describe the specific data gathered for Azure Event Hubs. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. > [!TIP] > To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md). -## Monitoring data from Azure Event Hubs -Azure Event Hubs collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). +For more information about the resource types for Event Hubs, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md). -See [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md) for a detailed reference of the logs and metrics created by Azure Event Hubs. -## Collection and routing -Platform metrics and the activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. +- Azure Storage -Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. + If you use Azure Storage to store the diagnostic logging information, the information is stored in containers named *insights-logs-operationlogs* and *insights-metrics-pt1m*. 
Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.EVENTHUB/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar. -See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Event Hubs are listed in [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#resource-logs). +- Azure Event Hubs -> [!NOTE] -> Azure Monitor doesn't include dimensions in the exported metrics data, that's sent to a destination like Azure Storage, Azure Event Hubs, Log Analytics, etc. ---### Azure Storage -If you use **Azure Storage** to store the diagnostic logging information, the information is stored in containers named **insights-logs-operationlogs** and **insights-metrics-pt1m**. Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar. + If you use Azure Event Hubs to store the diagnostic logging information, the information is stored in Event Hubs instances named *insights-logs-operationlogs* and *insights-metrics-pt1m*. You can also select an existing event hub except for the event hub for which you're configuring diagnostic settings. -### Azure Event Hubs -If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in Event Hubs instances named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select an existing event hub except for the event hub for which you're configuring diagnostic settings. +- Log Analytics -### Log Analytics -If you use **Log Analytics** to store the diagnostic logging information, the information is stored in tables named **AzureDiagnostics** / **AzureMetrics** or **resource specific tables** + If you use Log Analytics to store the diagnostic logging information, the information is stored in tables named *AzureDiagnostics / AzureMetrics* or resource specific tables. > [!IMPORTANT]-> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator). +> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator). > [!NOTE] > When you enable metrics in a diagnostic setting, dimension information is not currently included as part of the information sent to a storage account, event hub, or log analytics. -The metrics and logs you can collect are discussed in the following sections. ++Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. When you create a diagnostic setting, you specify which categories of logs to collect.
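To make the routing step concrete, here's a minimal ARM sketch of a diagnostic setting that routes Event Hubs logs and metrics to a Log Analytics workspace. The namespace name `contosonamespace` and the `logAnalyticsWorkspaceId` parameter are assumptions, and the category list is abbreviated; check the resource log categories in the reference for the full set. (ARM templates accept `//` comments.)

```json
{
  "type": "Microsoft.Insights/diagnosticSettings",
  "apiVersion": "2021-05-01-preview",
  "scope": "[format('Microsoft.EventHub/namespaces/{0}', 'contosonamespace')]", // hypothetical namespace
  "name": "route-to-log-analytics",
  "properties": {
    "workspaceId": "[parameters('logAnalyticsWorkspaceId')]", // hypothetical parameter
    "logs": [
      { "category": "OperationalLogs", "enabled": true },
      { "category": "RuntimeAuditLogs", "enabled": true }
    ],
    "metrics": [
      { "category": "AllMetrics", "enabled": true }
    ]
  }
}
```

Remember the preceding note: dimension information isn't included in metrics data exported this way.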
The categories for Azure Event Hubs are listed in [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#resource-logs). ++> [!NOTE] +> Azure Monitor doesn't include dimensions in the exported metrics data that's sent to a destination like Azure Storage, Azure Event Hubs, and Log Analytics. ++For a list of available metrics for Event Hubs, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#metrics). ++### Analyze metrics -## Analyze metrics You can analyze metrics for Azure Event Hubs, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Event Hubs data reference metrics](monitor-event-hubs-reference.md#metrics). :::image type="content" source="./media/monitor-event-hubs/metrics.png" alt-text="Screenshot showing the Metrics Explorer for an Event Hubs namespace." lightbox="./media/monitor-event-hubs/metrics.png"::: For reference, you can see a list of [all resource metrics supported in Azure Mo > Azure Monitor metrics data is available for 90 days. However, when creating charts only 30 days can be visualized. For example, if you want to visualize a 90 day period, you must break it into three charts of 30 days within the 90 day period. ### Filter and split+ For metrics that support dimensions, you can apply filters using a dimension value. For example, add a filter with `EntityName` set to the name of an event hub. You can also split a metric by dimension to visualize how different segments of the metric compare with each other. For more information of filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md). :::image type="content" source="./media/monitor-event-hubs/metrics-filter-split.png" alt-text="Screenshot showing the Metrics Explorer for an Event Hubs namespace with a filter." lightbox="./media/monitor-event-hubs/metrics-filter-split.png"::: -## Analyze logs -Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable __Send information to Log Analytics__. For more information, see the [Collection and routing](#collection-and-routing) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties. Azure Event Hubs has the capability to dispatch logs to either of two destination tables - Azure Diagnostic or Resource specific tables in Log Analytics.For a detailed reference of the logs and metrics, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md). ++For the available resource log categories, their associated Log Analytics tables, and the log schemas for Event Hubs, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#resource-logs). ++### Analyze logs ++Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable **Send information to Log Analytics**. For more information, see the [Metrics](#azure-monitor-platform-metrics) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties. Azure Event Hubs has the capability to dispatch logs to either of two destination tables: Azure Diagnostic or Resource specific tables in Log Analytics. 
For a detailed reference of the logs and metrics, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md). > [!IMPORTANT] > When you select **Logs** from the Azure Event Hubs menu, Log Analytics is opened with the query scope set to the current workspace. This means that log queries will only include data from that resource. If you want to run a query that includes data from other databases or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details. -### Sample Kusto queries +### Use runtime logs -> [!IMPORTANT] -> When you select **Logs** from the Azure Event Hubs menu, Log Analytics is opened with the query scope set to the current Azure Event Hubs namespace. This means that log queries will only include data from that resource. If you want to run a query that includes data from other workspaces or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details. +Azure Event Hubs allows you to monitor and audit data plane interactions of your client applications using runtime audit logs and application metrics logs. -Following are sample queries that you can use to help you monitor your Azure Event Hubs resources: +Using *Runtime audit logs*, you can capture aggregated diagnostic information for all data plane access operations such as publishing or consuming events. *Application metrics logs* capture the aggregated data on certain runtime metrics (such as consumer lag and active connections) related to client applications that are connected to Event Hubs. -### [AzureDiagnostics](#tab/AzureDiagnostics) +> [!NOTE] +> Runtime audit logs are available only in **premium** and **dedicated** tiers. -+ Get errors from the past seven days -- ```Kusto - AzureDiagnostics - | where TimeGenerated > ago(7d) - | where ResourceProvider =="MICROSOFT.EVENTHUB" - | where Category == "OperationalLogs" - | summarize count() by "EventName" --+ Get runtime audit logs generated in the last one hour. -- ```Kusto - AzureDiagnostics - | where TimeGenerated > ago(1h) - | where ResourceProvider =="MICROSOFT.EVENTHUB" - | where Category == "RuntimeAuditLogs" - ``` -+ Get access attempts to a key vault that resulted in "key not found" error. -- ```Kusto - AzureDiagnostics - | where ResourceProvider == "MICROSOFT.EVENTHUB" - | where Category == "Error" and OperationName == "wrapkey" - | project Message - ``` --+ Get operations performed with a key vault to disable or restore the key. -- ```Kusto - AzureDiagnostics - | where ResourceProvider == "MICROSOFT.EVENTHUB" - | where Category == "info" and OperationName == "disable" or OperationName == "restore" - | project Message - ``` -+ Get capture failures and their duration in seconds -- ```kusto - AzureDiagnostics - | where ResourceProvider == "MICROSOFT.EVENTHUB" - | where Category == "ArchiveLogs" - | summarize count() by "failures", "durationInSeconds" - ``` - -### [Resource Specific Table](#tab/Resourcespecifictable) +### Enable runtime logs +You can enable either runtime audit or application metrics logging by selecting *Diagnostic settings* from the *Monitoring* section on the Event Hubs namespace page in Azure portal. Select **Add diagnostic setting** as shown in the following image. 
- -+ Get Operational Logs for event hub resource for last 7 days - ```Kusto - AZMSOperationalLogs - | where Timegenerated > ago(7d) - | where Provider == "EVENTHUB" - | where resourceId == "<Resource Id>" // Replace your resource Id - ``` +Then you can enable log categories *RuntimeAuditLogs* or *ApplicationMetricsLogs* as needed. -+ Get capture logs for event hub for last 7 days - ```Kusto - AZMSArchiveLogs - | where EventhubName == "<Event Hub Name>" //Enter event hub entity name - | where TimeGenerated > ago(7d) - ``` +Once runtime logs are enabled, Event Hubs starts collecting and storing them according to the diagnostic setting configuration. +### Publish and consume sample data --## Use runtime logs +To collect sample runtime audit logs in your Event Hubs namespace, you can publish and consume sample data using client applications that are based on the [Event Hubs SDK](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md). That SDK uses Advanced Message Queuing Protocol (AMQP). Or you can use any [Apache Kafka client application](../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md). -Azure Event Hubs allows you to monitor and audit data plane interactions of your client applications using runtime audit logs and application metrics logs. +Application metrics include the following runtime metrics. -Using *Runtime audit logs* you can capture aggregated diagnostic information for all data plane access operations such as publishing or consuming events. -*Application metrics logs* capture the aggregated data on certain runtime metrics (such as consumer lag and active connections) related to client applications are connected to Event Hubs. -> [!NOTE] -> Runtime audit logs are available only in **premium** and **dedicated** tiers. +Therefore you can use application metrics to monitor runtime metrics such as consumer lag or active connections from a given client application. Fields associated with runtime audit logs are defined in [application metrics logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs). -### Enable runtime logs -You can enable either runtime audit or application metrics logging by selecting *Diagnostic settings* from the *Monitoring* section on the Event Hubs namespace page in Azure portal. Select **Add diagnostic setting** as shown in the following image. -Then you can enable log categories *RuntimeAuditLogs* or *ApplicationMetricsLogs* as needed. ++### Sample Kusto queries ++Following are sample queries that you can use to help you monitor your Azure Event Hubs resources: ++### [AzureDiagnostics](#tab/AzureDiagnostics) ++- Get errors from the past seven days. ++ ```Kusto + AzureDiagnostics + | where TimeGenerated > ago(7d) + | where ResourceProvider =="MICROSOFT.EVENTHUB" + | where Category == "OperationalLogs" + | summarize count() by "EventName" + ``` ++- Get runtime audit logs generated in the last one hour. ++ ```Kusto + AzureDiagnostics + | where TimeGenerated > ago(1h) + | where ResourceProvider =="MICROSOFT.EVENTHUB" + | where Category == "RuntimeAuditLogs" + ``` ++- Get access attempts to a key vault that resulted in "key not found" error. ++ ```Kusto + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.EVENTHUB" + | where Category == "Error" and OperationName == "wrapkey" + | project Message + ``` ++- Get operations performed with a key vault to disable or restore the key. 
+ + ```Kusto + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.EVENTHUB" + | where Category == "info" and OperationName == "disable" or OperationName == "restore" + | project Message + ``` ++- Get capture failures and their duration in seconds. ++ ```kusto + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.EVENTHUB" + | where Category == "ArchiveLogs" + | summarize count() by "failures", "durationInSeconds" + ``` ### [Resource Specific Table](#tab/Resourcespecifictable) ++- Get Operational Logs for event hub resource for last seven days. ++ ```Kusto + AZMSOperationalLogs + | where TimeGenerated > ago(7d) + | where Provider == "EVENTHUB" + | where resourceId == "<Resource Id>" // Replace your resource Id + ``` ++- Get capture logs for event hub for last seven days. -### Publish and consume sample data -To collect sample runtime audit logs in your Event Hubs namespace, you can publish and consume sample data using client applications, which are based on [Event Hubs SDK](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md), which uses Advanced Message Queuing Protocol (AMQP) or using any [Apache Kafka client application](../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md). + ```Kusto + AZMSArchiveLogs + | where EventhubName == "<Event Hub Name>" // Enter event hub entity name + | where TimeGenerated > ago(7d) + ``` + ### Analyze runtime audit logs-You can analyze the collected runtime audit logs using the following sample query. ++You can analyze the collected runtime audit logs using the following sample query. ### [AzureDiagnostics](#tab/AzureDiagnosticsforRuntimeAudit) AzureDiagnostics | where ResourceProvider == "MICROSOFT.EVENTHUB" | where Category == "RuntimeAuditLogs" ```+ ### [Resource Specific Table](#tab/ResourcespecifictableforRuntimeAudit) ```kusto AZMSRuntimeAuditLogs | where TimeGenerated > ago(1h) | where Provider == "EVENTHUB" ```+ -Up on the execution of the query you should be able to obtain corresponding audit logs in the following format. ++Upon execution of the query, you should be able to obtain the corresponding audit logs in the following format. + :::image type="content" source="./media/monitor-event-hubs/runtime-audit-logs.png" alt-text="Image showing the result of a sample query to analyze runtime audit logs." lightbox="./media/monitor-event-hubs/runtime-audit-logs.png"::: -By analyzing these logs, you should be able to audit how each client application interacts with Event Hubs. Each field associated with runtime audit logs is defined in [runtime audit logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs). +By analyzing these logs, you should be able to audit how each client application interacts with Event Hubs. Each field associated with runtime audit logs is defined in [runtime audit logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs). +### Analyze application metrics -### Analyze application metrics -You can analyze the collected application metrics logs using the following sample query. +You can analyze the collected application metrics logs using the following sample query. ### [AzureDiagnostics](#tab/AzureDiagnosticsforAppMetrics) AZMSApplicationMetricLogs | where TimeGenerated > ago(1h) | where Provider == "EVENTHUB" ```--Application metrics include the following runtime metrics. 
-Therefore you can use application metrics to monitor runtime metrics such as consumer lag or active connection from a given client application. Fields associated with runtime audit logs are defined in [application metrics logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs). + -## Alerts You can access alerts for Azure Event Hubs by selecting **Alerts** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md) for details on creating alerts. +### Event Hubs alert rules ++The following table lists some suggested alert rules for Event Hubs. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md). ++| Alert type | Condition | Description | +|:|:|:| +| Metric | CPU | When CPU utilization exceeds a set value. | +| Metric | Available Memory | When Available Memory drops below a set value. | +| Metric | Capture Backlog | When Capture Backlog is above a certain value. | + -## Next steps +## Related content -- For a reference of the logs and metrics, see [Monitoring Azure Event Hubs data reference](monitor-event-hubs-reference.md).-- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).+- See [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md) for a reference of the metrics, logs, and other important values created for Event Hubs. +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources. |
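As a complement to the application metrics query shown in this entry, a short aggregation makes a single runtime metric easier to trend. The following is a minimal sketch only: the `ActivityName` and `ActivityValue` column names and the `ConsumerLag` activity name are assumptions based on the application metrics logs reference, so verify them against your workspace schema before relying on the query.

```kusto
// Sketch: trend one application metric (for example, consumer lag) in 5-minute bins.
// ActivityName, ActivityValue, and "ConsumerLag" are assumed names; check your workspace schema.
AZMSApplicationMetricLogs
| where TimeGenerated > ago(1h)
| where Provider == "EVENTHUB"
| where ActivityName == "ConsumerLag" // hypothetical activity name
| summarize avg(ActivityValue) by bin(TimeGenerated, 5m)
| render timechart
```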
event-hubs | Resource Governance With App Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-with-app-groups.md | You can create an application group using the Azure portal by following these steps: 1. Confirm that **Enabled** is selected. To have the application group start in the disabled state, clear the **Enabled** option. This flag determines whether the clients of an application group can access Event Hubs. 1. For **Security context type**, select **Namespace Shared access policy**, **event hub Shared Access Policy**, or **Microsoft Entra application**. An application group supports the selection of a SAS key at either the namespace or the entity (event hub) level. When you create the application group, you should associate it with either a shared access signature (SAS) or a Microsoft Entra application ID, which is used by client applications. 1. If you selected **Namespace Shared access policy**:- 1. For **SAS key name**, select the SAS policy that can be used as a security context for this application group.You can select **Add SAS Policy** to add a new policy and then associate with the application group. + 1. For **SAS key name**, select the SAS policy that can be used as a security context for this application group. You can select **Add SAS Policy** to add a new policy and then associate it with the application group. :::image type="content" source="./media/resource-governance-with-app-groups/create-application-groups-with-namespace-shared-access-key.png" alt-text="Screenshot of the Add application group page with Namespace Shared access policy option selected.":::  1. If you selected **Event Hubs Shared access policy**: The following ARM template shows how to update an existing namespace (`contosona ### Decide threshold value for throttling policies -Azure Event Hubs supports [Application Metric Logs ](monitor-event-hubs-reference.md#application-metrics-logs) functionality to observe usual throughput within your system and accordingly decide on the threshold value for application group. You can follow these steps to decide on a threshold value: +Azure Event Hubs supports [Application Metric Logs](monitor-event-hubs-reference.md#application-metrics-logs) functionality to observe the usual throughput within your system and accordingly decide on the threshold value for the application group. You can follow these steps to decide on a threshold value: -1. Turn on [diagnostic settings](monitor-event-hubs.md#collection-and-routing) in Event Hubs with **Application Metric logs** as selected category and choose **Log Analytics** as destination. +1. Turn on [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) in Event Hubs with **Application Metric logs** as the selected category, and choose **Log Analytics** as the destination. 2. Create an empty application group without any throttling policy. 3. Continue sending messages/events to the event hub at the usual throughput. 4. Go to the **Log Analytics workspace** and query for the right activity name (based on the [throttling policy threshold limits](resource-governance-overview.md#throttling-policythreshold-limits)) in the **AzureDiagnostics** table. The following sample query is set to track the threshold value for incoming messages: You can use the following example query to find all the throttled requests in ce | where Outcome_s =="Throttled" ``` -Due to restrictions at protocol level, throttled request logs are not generated for consumer operations within event hub ( `OutgoingMessages` or `OutgoingBytes`).
when requests are throttled at consumer side, you would observe sluggish egress throughput. +Due to restrictions at the protocol level, throttled request logs aren't generated for consumer operations within an event hub (`OutgoingMessages` or `OutgoingBytes`). When requests are throttled on the consumer side, you observe sluggish egress throughput. ## Next steps |
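Building on the threshold-selection steps in this entry, the following sketch derives a candidate threshold from a week of observed traffic. It's a sketch under assumptions: `ActivityName_s` and `Value_d` are guesses at how AzureDiagnostics flattens the application metrics log fields, so confirm the column names in your own workspace first.

```kusto
// Sketch: derive a starting IncomingMessages throttling threshold from observed traffic.
// ActivityName_s and Value_d are assumed AzureDiagnostics column names; verify them.
AzureDiagnostics
| where TimeGenerated > ago(7d)
| where Category == "ApplicationMetricsLogs"
| where ActivityName_s == "IncomingMessages"
| summarize perMinute = sum(Value_d) by bin(TimeGenerated, 1m)
| summarize suggestedThreshold = percentile(perMinute, 95) // leave headroom above normal load
```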
expressroute | Design Architecture For Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/design-architecture-for-resiliency.md | ExpressRoute uses [Azure Service Health](../service-health/overview.md) to notify #### Configure gateway health monitoring & alerting -[Setup monitoring](expressroute-monitoring-metrics-alerts.md#expressroute-gateways) using Azure Monitor for ExpressRoute Gateway availability, performance, and scalability. When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are multiple [gateway metrics](expressroute-monitoring-metrics-alerts.md#expressroute-virtual-network-gateway-metrics) available to you to better understand the performance of your gateway. +[Set up monitoring](monitor-expressroute-reference.md#supported-metrics-for-microsoftnetworkexpressroutegateways) using Azure Monitor for ExpressRoute gateway availability, performance, and scalability. When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are multiple [gateway metrics](expressroute-monitoring-metrics-alerts.md#expressroute-virtual-network-gateway-metrics) available to you to better understand the performance of your gateway. |
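If you route the gateway metrics mentioned in this entry to a Log Analytics workspace, a query along these lines can surface sustained CPU pressure before it affects availability. A minimal sketch: the metric name is taken from the gateway metrics reference but should be treated as an assumption and confirmed against your own `AzureMetrics` data.

```kusto
// Sketch: flag 15-minute windows of sustained high ExpressRoute gateway CPU.
// The MetricName value is an assumption; verify it in your workspace.
AzureMetrics
| where TimeGenerated > ago(1d)
| where MetricName == "ExpressRouteGatewayCpuUtilization"
| summarize avgCpu = avg(Average) by bin(TimeGenerated, 15m), Resource
| where avgCpu > 80 // threshold is illustrative; tune to your baseline
```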
expressroute | Expressroute Howto Linkvnet Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md | With Virtual Network Peering and UDR support, FastPath will send traffic directly. With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. With both of these features enabled, FastPath will directly send traffic to a Private Endpoint deployed in a "spoke" Virtual Network. -These scenarios are Generally Available for limited scenarios with connections associated to 100 Gbps ExpressRoute Direct circuits. To enable, follow the below guidance: +These features are generally available for limited scenarios with connections associated with 10-Gbps and 100-Gbps ExpressRoute Direct circuits. To enable them, follow this guidance: 1. Complete this [Microsoft Form](https://aka.ms/fastpathlimitedga) to request to enroll your subscription. Requests may take up to 4 weeks to complete, so plan deployments accordingly. 2. Once you receive a confirmation from Step 1, run the following Azure PowerShell command in the target Azure subscription. ```azurepowershell-interactive |
expressroute | Expressroute Monitoring Metrics Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md | - Title: 'Azure ExpressRoute: Monitoring, Metrics, and Alerts' -description: Learn about Azure ExpressRoute monitoring, metrics, and alerts using Azure Monitor, the one stop shop for all metrics, alerting, diagnostic logs across Azure. --- Previously updated : 03/31/2024----# ExpressRoute monitoring, metrics, and alerts --This article helps you understand ExpressRoute monitoring, metrics, and alerts using Azure Monitor. Azure Monitor is one stop shop for all metrics, alerting, diagnostic logs across all of Azure. - -> [!NOTE] -> Using **Classic Metrics** is not recommended. -> --## ExpressRoute metrics --To view **Metrics**, go to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*. --Once a metric is selected, the default aggregation is applied. Optionally, you can apply splitting, which shows the metric with different dimensions. --> [!IMPORTANT] -> When viewing ExpressRoute metrics in the Azure portal, select a time granularity of **5 minutes or greater** for best possible results. -> -> :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/metric-granularity.png" alt-text="Screenshot of time granularity options."::: --### Aggregation Types: --Metrics explorer supports sum, maximum, minimum, average and count as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). You should use the recommended Aggregation type when reviewing the insights for each ExpressRoute metric. --* Sum: The sum of all values captured during the aggregation interval. -* Count: The number of measurements captured during the aggregation interval. -* Average: The average of the metric values captured during the aggregation interval. -* Min: The smallest value captured during the aggregation interval. -* Max: The largest value captured during the aggregation interval. --### ExpressRoute circuit --| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | -| | | | | | | | -| [ARP Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | Peering Type, Peer | Yes | -| [BGP Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. 
| Peering Type, Peer | Yes | -| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Peering Type | Yes | -| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Peering Type | Yes | -| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Peering Type | Yes | -| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Peering Type | Yes | -| GlobalReachBitsInPerSecond | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeredCircuitSKey | Yes | -| GlobalReachBitsOutPerSecond | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | PeeredCircuitSKey | Yes | -| [FastPathRoutesCount](#fastpath-routes-count-at-circuit-level) | Fastpath | Count | Maximum | Count of FastPath routes configured on the circuit | None | Yes | -->[!NOTE] ->Using *GlobalGlobalReachBitsInPerSecond* and *GlobalGlobalReachBitsOutPerSecond* will only be visible if at least one Global Reach connection is established. -> --### ExpressRoute gateways --| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | -| | | | | | | | -| [Bits received per second](#gwbits) | Performance | BitsPerSecond | Average | Total bits received on ExpressRoute gateway per second | roleInstance | Yes | -| [CPU utilization](#cpu) | Performance | Count | Average | CPU Utilization of the ExpressRoute Gateway | roleInstance | Yes | -| [Packets per second](#packets) | Performance | CountPerSecond | Average | Total Packets received on ExpressRoute Gateway per second | roleInstance | Yes | -| [Count of routes advertised to peer](#advertisedroutes) | Availability | Count | Maximum | Count Of Routes Advertised To Peer by ExpressRouteGateway | roleInstance | Yes | -| [Count of routes learned from peer](#learnedroutes)| Availability | Count | Maximum | Count Of Routes Learned From Peer by ExpressRouteGateway | roleInstance | Yes | -| [Frequency of routes changed](#frequency) | Availability | Count | Total | Frequency of Routes change in ExpressRoute Gateway | roleInstance | Yes | -| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Estimated number of VMs in the virtual network | No Dimensions | Yes | -| [Active flows](#activeflows) | Scalability | Count | Average | Number of active flows on ExpressRoute Gateway | roleInstance | Yes | -| [Max flows created per second](#maxflows) | Scalability | FlowsPerSecond | Maximum | Maximum number of flows created per second on ExpressRoute Gateway | roleInstance, direction | Yes | --### ExpressRoute Gateway connections --| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | -| | | | | | | | -| [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second through ExpressRoute gateway | ConnectionName | Yes | -| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | Yes | --### ExpressRoute Direct --| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? 
| -| | | | | | | | -| [BitsInPerSecond](#directin) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Link | Yes | -| [BitsOutPerSecond](#directout) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Link | Yes | -| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Link | Yes | -| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Link | Yes | -| [AdminState](#admin) | Physical Connectivity | Count | Average | Admin state of the port | Link | Yes | -| [LineProtocol](#line) | Physical Connectivity | Count | Average | Line protocol status of the port | Link | Yes | -| [RxLightLevel](#rxlight) | Physical Connectivity | Count | Average | Rx Light level in dBm | Link, Lane | Yes | -| [TxLightLevel](#txlight) | Physical Connectivity | Count | Average | Tx light level in dBm | Link, Lane | Yes | -| [FastPathRoutesCount](#fastpath-routes-count-at-port-level) | FastPath | Count | Maximum | Count of FastPath routes configured on the port | None | Yes | --### ExpressRoute Traffic Collector --| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | -| | | | | | | | -| CPU utilization | Performance | Count | Average | CPU Utilization of the ExpressRoute Traffic Collector | roleInstance | Yes | -| Memory Utilization | Performance | CountPerSecond | Average | Memory Utilization of the ExpressRoute Traffic Collector | roleInstance | Yes | -| Count of flow records processed | Availability | Count | Maximum | Count of number of flow records processed or ingested | roleInstance, ExpressRoute Circuit | Yes | --## Circuits metrics --### <a name = "circuitbandwidth"></a>Bits In and Out - Metrics across all peerings --Aggregation type: *Avg* --You can view metrics across all peerings on a given ExpressRoute circuit. ---### Bits In and Out - Metrics per peering --Aggregation type: *Avg* --You can view metrics for private, public, and Microsoft peering in bits/second. ---### <a name = "bgp"></a>BGP Availability - Split by Peer --Aggregation type: *Avg* --You can view near to real-time availability of BGP (Layer-3 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Primary BGP session status is up for private peering and the Second BGP session status is down for private peering. --->[!NOTE] ->During maintenance between the Microsoft edge and core network, BGP availability will appear down even if the BGP session between the customer edge and Microsoft edge remains up. For information about maintenance between the Microsoft edge and core network, make sure to have your [maintenance alerts turned on and configured](./maintenance-alerts.md). -> --### FastPath routes count (at circuit level) --Aggregation type: *Max* --This metric shows the number of FastPath routes configured on a circuit. Set an alert for when the number of FastPath routes on a circuit goes beyond the threshold limit. For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits). ---### <a name = "arp"></a>ARP Availability - Split by Peering --Aggregation type: *Avg* --You can view near to real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-2 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). 
This dashboard shows the Private Peering ARP session status is up across both peers, but down for Microsoft peering for both peers. The default aggregation (Average) was utilized across both peers. ---## ExpressRoute Direct Metrics --### <a name = "admin"></a>Admin State - Split by link --Aggregation type: *Avg* --You can view the Admin state for each link of the ExpressRoute Direct port pair. The Admin state represents if the physical port is on or off. This state is required to pass traffic across the ExpressRoute Direct connection. ---### <a name = "directin"></a>Bits In Per Second - Split by link --Aggregation type: *Avg* --You can view the bits in per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare inbound bandwidth for both links. ---### <a name = "directout"></a>Bits Out Per Second - Split by link --Aggregation type: *Avg* --You can also view the bits out per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare outbound bandwidth for both links. ---### <a name = "line"></a>Line Protocol - Split by link --Aggregation type: *Avg* --You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates if the physical link is up and running over ExpressRoute Direct. Monitor this dashboard and set alerts to know when the physical connection goes down. ---### <a name = "rxlight"></a>Rx Light Level - Split by link --Aggregation type: *Avg* --You can view the Rx light level (the light level that the ExpressRoute Direct port is **receiving**) for each port. Healthy Rx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Rx light level falls outside of the healthy range. --->[!NOTE] -> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Rx light levels by lane. However, this is not supported on all deployments. -> --### <a name = "txlight"></a>Tx Light Level - Split by link --Aggregation type: *Avg* --You can view the Tx light level (the light level that the ExpressRoute Direct port is **transmitting**) for each port. Healthy Tx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Tx light level falls outside of the healthy range. --->[!NOTE] -> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Tx light levels by lane. However, this is not supported on all deployments. -> --### FastPath routes count (at port level) --Aggregation type: *Max* --This metric shows the number of FastPath routes configured on an ExpressRoute Direct port. --*Guidance:* Set an alert for when the number of FastPath routes on the port goes beyond the threshold limit. For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits). ---## ExpressRoute Virtual Network Gateway Metrics --Aggregation type: *Avg* --When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. 
There are six gateway metrics available to you to better understand the performance of your gateway: --* Bits received per second -* CPU Utilization -* Packets per seconds -* Count of routes advertised to peers -* Count of routes learned from peers -* Frequency of routes changed -* Number of VMs in the virtual network -* Active flows -* Max flows created per second --We highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues. --### <a name = "gwbits"></a>Bits received per second - Split by instance --Aggregation type: *Avg* --This metric captures inbound bandwidth utilization on the ExpressRoute virtual network gateway instances. Set an alert for how frequent the bandwidth utilization exceeds a certain threshold. If you need more bandwidth, increase the size of the ExpressRoute virtual network gateway. ---### <a name = "cpu"></a>CPU Utilization - Split by instance --Aggregation type: *Avg* --You can view the CPU utilization of each gateway instance. The CPU utilization might spike briefly during routine host maintenance but prolong high CPU utilization could indicate your gateway is reaching a performance bottleneck. Increasing the size of the ExpressRoute gateway might resolve this issue. Set an alert for how frequent the CPU utilization exceeds a certain threshold. ---### <a name = "packets"></a>Packets Per Second - Split by instance --Aggregation type: *Avg* --This metric captures the number of inbound packets traversing the ExpressRoute gateway. You should expect to see a consistent stream of data here if your gateway is receiving traffic from your on-premises network. Set an alert for when the number of packets per second drops below a threshold indicating that your gateway is no longer receiving traffic. ---### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by instance --Aggregation type: *Max* --This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces might include virtual networks that are connected using virtual network peering and uses remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drop below the threshold for the number of virtual network address spaces you're aware of. ---### <a name = "learnedroutes"></a>Count of routes learned from peer - Split by instance --Aggregation type: *Max* --This metric shows the number of routes the ExpressRoute gateway is learning from peers connected to the ExpressRoute circuit. These routes can be either from another virtual network connected to the same circuit or learned from on-premises. Set an alert for when the number of learned routes drop below a certain threshold. This metric can indicate either the gateway is seeing a performance problem or remote peers are no longer advertising routes to the ExpressRoute circuit. ---### <a name = "frequency"></a>Frequency of routes change - Split by instance --Aggregation type: *Sum* --This metric shows the frequency of routes being learned from or advertised to remote peers. You should first investigate your on-premises devices to understand why the network is changing so frequently. A high frequency in routes change could indicate a performance problem on the ExpressRoute gateway where scaling the gateway SKU up might resolve the problem. 
Set an alert for a frequency threshold to be aware of when your ExpressRoute gateway is seeing abnormal route changes. ---### <a name = "vm"></a>Number of VMs in the virtual network --Aggregation type: *Max* --This metric shows the number of virtual machines that are using the ExpressRoute gateway. The number of virtual machines might include VMs from peered virtual networks that use the same ExpressRoute gateway. Set an alert for this metric if the number of VMs goes above a certain threshold that could affect the gateway performance. --->[!NOTE] -> To maintain reliability of the service, Microsoft often performs platform or OS maintenance on the gateway service. During this time, this metric may fluctuate and report inaccurately. -> --## <a name = "activeflows"></a>Active flows --Aggregation type: *Avg* --Split by: Gateway Instance ---This metric displays a count of the total number of active flows on the ExpressRoute Gateway. Only inbound traffic from on-premises is captured for active flows. Through split at instance level, you can see active flow count per gateway instance. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits). ---## <a name = "maxflows"></a>Max flows created per second --Aggregation type: *Max* --Split by: Gateway Instance and Direction (Inbound/Outbound) --This metric displays the maximum number of flows created per second on the ExpressRoute Gateway. Through split at instance level and direction, you can see max flow creation rate per gateway instance and inbound/outbound direction respectively. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits). ---## <a name = "connectionbandwidth"></a>ExpressRoute gateway connections in bits/seconds --Aggregation type: *Avg* --This metric shows the bits per second for ingress and egress to Azure through the ExpressRoute gateway. You can split this metric further to see specific connections to the ExpressRoute circuit. ---## ExpressRoute Traffic Collector metrics --### CPU Utilization - Split by instance --Aggregation type: *Avg* (of percentage of total utilized CPU) --*Granularity: 5 min* --You can view the CPU utilization of each ExpressRoute Traffic Collector instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck. --**Guidance:** Set an alert for when avg CPU utilization exceeds a certain threshold. ---### Memory Utilization - Split by instance --Aggregation type: *Avg* (of percentage of total utilized Memory) --*Granularity: 5 min* --You can view the memory utilization of each ExpressRoute Traffic Collector instance. Memory utilization might spike briefly during routine host maintenance, but prolonged high memory utilization could indicate your Azure Traffic Collector is reaching a performance bottleneck. --**Guidance:** Set an alert for when avg memory utilization exceeds a certain threshold. ---### Count of flow records processed - Split by instances or ExpressRoute circuit --Aggregation type: *Count* --*Granularity: 5 min* --You can view the count of number of flow records processed by ExpressRoute Traffic Collector, aggregated across ExpressRoute Circuits.
Customer can split the metrics across each ExpressRoute Traffic Collector instance or ExpressRoute circuit when multiple circuits are associated to the ExpressRoute Traffic Collector. Monitoring this metric helps you understand if you need to deploy more ExpressRoute Traffic Collector instances or migrate ExpressRoute circuit association from one ExpressRoute Traffic Collector deployment to another. --**Guidance:** Splitting by circuits is recommended when multiple ExpressRoute circuits are associated with an ExpressRoute Traffic Collector deployment. This metric helps determine the flow count of each ExpressRoute circuit and ExpressRoute Traffic Collector utilization by each ExpressRoute circuit. ---## Alerts for ExpressRoute gateway connections --1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**. -- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/monitor-overview.png" alt-text="Screenshot of the alerts option from the monitor overview page."::: --1. Select **+ Create > Alert rule** and select the ExpressRoute gateway connection resource. Select **Next: Condition >** to configure the signal. -- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/select-expressroute-gateway.png" alt-text="Screenshot of the selecting ExpressRoute virtual network gateway from the select a resource page."::: --1. On the *Select a signal* page, select a metric, resource health, or activity log that you want to be alerted. Depending on the signal you select, you might need to enter additional information such as a threshold value. You can also combine multiple signals into a single alert. Select **Next: Actions >** to define who and how they get notify. -- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/signal.png" alt-text="Screenshot of list of signals that can be alerted for ExpressRoute gateways."::: --1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who receives them. -- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/action-group.png" alt-text="Screenshot of add action groups page."::: --1. Select **Review + create** and then **Create** to deploy the alert into your subscription. --### Alerts based on each peering --After you select a metric, certain metric allow you to set up dimensions based on peering or a specific peer (virtual networks). ---### Configure alerts for activity logs on circuits --When selecting signals to be alerted on, you can select **Activity Log** signal type. ---## More metrics in Log Analytics --You can also view ExpressRoute metrics by going to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output contains the following columns. --| **Column** | **Type** | **Description** | -| | | | -| TimeGrain | string | PT1M (metric values are pushed every minute) | -| Count | real | Usually is 2 (each MSEE pushes a single metric value every minute) | -| Minimum | real | The minimum of the two metric values pushed by the two MSEEs | -| Maximum | real | The maximum of the two metric values pushed by the two MSEEs | -| Average | real | Equal to (Minimum + Maximum)/2 | -| Total | real | Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried) | - -## Next steps --Set up your ExpressRoute connection. 
- -* [Create and modify a circuit](expressroute-howto-circuit-arm.md) -* [Create and modify peering configuration](expressroute-howto-routing-arm.md) -* [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md) |
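The alert walkthrough removed in this entry configures alerts on circuit activity log signals. If the activity log is routed to a Log Analytics workspace, a query like the following sketch previews the administrative events such an alert would match; it uses standard `AzureActivity` columns, but verify them in your own workspace.

```kusto
// Sketch: recent administrative operations on ExpressRoute circuits,
// the same events an activity log alert rule would fire on.
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue has "expressRouteCircuits"
| project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller
| order by TimeGenerated desc
```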
expressroute | Monitor Expressroute Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute-reference.md | Title: Monitoring ExpressRoute data reference -description: Important reference material needed when you monitor ExpressRoute -+ Title: Monitoring data reference for Azure ExpressRoute +description: This article contains important reference material you need when you monitor Azure ExpressRoute by using Azure Monitor. Last updated : 07/11/2024+ + - Previously updated : 06/22/2021 +# Azure ExpressRoute monitoring data reference -# Monitoring ExpressRoute data reference -This article provides a reference of log and metric data collected to analyze the performance and availability of ExpressRoute. -See [Monitoring ExpressRoute](monitor-expressroute.md) for details on collecting and analyzing monitoring data for ExpressRoute. +See [Monitor Azure ExpressRoute](monitor-expressroute.md) for details on the data you can collect for ExpressRoute and how to use it. -## Metrics -This section lists all the automatically collected platform metrics for ExpressRoute. For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). +>[!NOTE] +> The *GlobalReachBitsInPerSecond* and *GlobalReachBitsOutPerSecond* metrics are only visible if at least one Global Reach connection is established. +> -| Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | -|-|--| -| ExpressRoute circuit | [Microsoft.Network/expressRouteCircuits](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) | -| ExpressRoute circuit peering | [Microsoft.Network/expressRouteCircuits/peerings](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutecircuitspeerings) | -| ExpressRoute Gateways | [Microsoft.Network/expressRouteGateways](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) | -| ExpressRoute Direct | [Microsoft.Network/expressRoutePorts](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressrouteports) | +### Supported metrics for Microsoft.Network/expressRouteCircuits ->[!NOTE] +The following table lists the metrics available for the Microsoft.Network/expressRouteCircuits resource type. +++### Supported metrics for Microsoft.Network/expressRouteCircuits/peerings ++The following table lists the metrics available for the Microsoft.Network/expressRouteCircuits/peerings resource type. +++### Supported metrics for microsoft.network/expressroutegateways ++The following table lists the metrics available for the microsoft.network/expressroutegateways resource type. +++### Supported metrics for Microsoft.Network/expressRoutePorts ++The following table lists the metrics available for the Microsoft.Network/expressRoutePorts resource type. +++### Metrics information ++Follow links in these lists for more information about metrics from the preceding tables.
++ExpressRoute circuits metrics: ++- [ARP Availability](#arp) +- [BGP Availability](#bgp) +- [BitsInPerSecond](#circuitbandwidth) +- [BitsOutPerSecond](#circuitbandwidth) +- DroppedInBitsPerSecond +- DroppedOutBitsPerSecond +- GlobalReachBitsInPerSecond +- GlobalReachBitsOutPerSecond +- [FastPathRoutesCount](#fastpath-routes-count-at-circuit-level) ++> [!NOTE] +> The *GlobalReachBitsInPerSecond* and *GlobalReachBitsOutPerSecond* metrics are only visible if at least one Global Reach connection is established.++ExpressRoute gateways metrics: ++- [Bits received per second](#gwbits) +- [CPU utilization](#cpu) +- [Packets per second](#packets) +- [Count of routes advertised to peer](#advertisedroutes) +- [Count of routes learned from peer](#learnedroutes) +- [Frequency of routes changed](#frequency) +- [Number of VMs in virtual network](#vm) +- [Active flows](#activeflows) +- [Max flows created per second](#maxflows) ++ExpressRoute gateway connections metrics: ++- [BitsInPerSecond](#connectionbandwidth) +- [BitsOutPerSecond](#connectionbandwidth) ++ExpressRoute Direct metrics: ++- [BitsInPerSecond](#directin) +- [BitsOutPerSecond](#directout) +- DroppedInBitsPerSecond +- DroppedOutBitsPerSecond +- [AdminState](#admin) +- [LineProtocol](#line) +- [RxLightLevel](#rxlight) +- [TxLightLevel](#txlight) +- [FastPathRoutesCount](#fastpath-routes-count-at-port-level) ++ExpressRoute Traffic Collector metrics: ++- [CPU utilization](#cpu-utilizationsplit-by-instance-1) +- [Memory Utilization](#memory-utilizationsplit-by-instance) +- [Count of flow records processed](#count-of-flow-records-processedsplit-by-instances-or-expressroute-circuit) ++### Circuits metrics ++#### <a name = "arp"></a>ARP Availability - Split by Peering ++Aggregation type: *Avg* ++You can view near real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-2 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Private Peering ARP session status is up across both peers, but down for Microsoft peering for both peers. The default aggregation (Average) was utilized across both peers. +++#### <a name = "bgp"></a>BGP Availability - Split by Peer ++Aggregation type: *Avg* ++You can view near real-time availability of BGP (Layer-3 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the primary BGP session status is up for private peering and the secondary BGP session status is down for private peering. +++>[!NOTE] +>During maintenance between the Microsoft edge and core network, BGP availability will appear down even if the BGP session between the customer edge and Microsoft edge remains up. For information about maintenance between the Microsoft edge and core network, make sure to have your [maintenance alerts turned on and configured](./maintenance-alerts.md). +> ++#### <a name = "circuitbandwidth"></a>Bits In and Out - Metrics across all peerings ++Aggregation type: *Avg* ++You can view metrics across all peerings on a given ExpressRoute circuit. +++#### Bits In and Out - Metrics per peering ++Aggregation type: *Avg* ++You can view metrics for private, public, and Microsoft peering in bits/second. +++#### FastPath routes count (at circuit level) ++Aggregation type: *Max* ++This metric shows the number of FastPath routes configured on a circuit. Set an alert for when the number of FastPath routes on a circuit goes beyond the threshold limit.
For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits). +++### Virtual network gateway metrics ++Aggregation type: *Avg* ++When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are nine gateway metrics available to you to better understand the performance of your gateway: ++- Bits received per second +- CPU Utilization +- Packets per second +- Count of routes advertised to peers +- Count of routes learned from peers +- Frequency of routes changed +- Number of VMs in the virtual network +- Active flows +- Max flows created per second ++We highly recommend that you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues. ++#### <a name = "gwbits"></a>Bits received per second - Split by instance ++Aggregation type: *Avg* ++This metric captures inbound bandwidth utilization on the ExpressRoute virtual network gateway instances. Set an alert for how frequently the bandwidth utilization exceeds a certain threshold. If you need more bandwidth, increase the size of the ExpressRoute virtual network gateway. +++#### <a name = "cpu"></a>CPU Utilization - Split by instance ++Aggregation type: *Avg* ++You can view the CPU utilization of each gateway instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate that your gateway is reaching a performance bottleneck. Increasing the size of the ExpressRoute gateway might resolve this issue. Set an alert for how frequently the CPU utilization exceeds a certain threshold. +++#### <a name = "packets"></a>Packets Per Second - Split by instance ++Aggregation type: *Avg* ++This metric captures the number of inbound packets traversing the ExpressRoute gateway. You should expect to see a consistent stream of data here if your gateway is receiving traffic from your on-premises network. Set an alert for when the number of packets per second drops below a threshold, indicating that your gateway is no longer receiving traffic. +++#### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by instance ++Aggregation type: *Max* ++This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces might include virtual networks that are connected by using virtual network peering and use the remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drops below the threshold for the number of virtual network address spaces you're aware of. +++#### <a name = "learnedroutes"></a>Count of routes learned from peer - Split by instance ++Aggregation type: *Max* ++This metric shows the number of routes the ExpressRoute gateway is learning from peers connected to the ExpressRoute circuit. These routes can be either from another virtual network connected to the same circuit or learned from on-premises. Set an alert for when the number of learned routes drops below a certain threshold. This metric can indicate that either the gateway is experiencing a performance problem or remote peers are no longer advertising routes to the ExpressRoute circuit. +++#### <a name = "frequency"></a>Frequency of routes change - Split by instance ++Aggregation type: *Sum* ++This metric shows the frequency of routes being learned from or advertised to remote peers.
You should first investigate your on-premises devices to understand why the network is changing so frequently. A high frequency in routes change could indicate a performance problem on the ExpressRoute gateway where scaling the gateway SKU up might resolve the problem. Set an alert for a frequency threshold to be aware of when your ExpressRoute gateway is seeing abnormal route changes. +++#### <a name = "vm"></a>Number of VMs in the virtual network ++Aggregation type: *Max* ++This metric shows the number of virtual machines that are using the ExpressRoute gateway. The number of virtual machines might include VMs from peered virtual networks that use the same ExpressRoute gateway. Set an alert for this metric if the number of VMs goes above a certain threshold that could affect the gateway performance. +++>[!NOTE] +> To maintain reliability of the service, Microsoft often performs platform or OS maintenance on the gateway service. During this time, this metric may fluctuate and report inaccurately. +> ++#### <a name = "activeflows"></a>Active flows ++Aggregation type: *Avg* ++Split by: Gateway Instance ++This metric displays a count of the total number of active flows on the ExpressRoute Gateway. Only inbound traffic from on-premises is captured for active flows. Through split at instance level, you can see active flow count per gateway instance. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits). +++#### <a name = "maxflows"></a>Max flows created per second ++Aggregation type: *Max* ++Split by: Gateway Instance and Direction (Inbound/Outbound) ++This metric displays the maximum number of flows created per second on the ExpressRoute Gateway. Through split at instance level and direction, you can see max flow creation rate per gateway instance and inbound/outbound direction respectively. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits). +++### <a name = "connectionbandwidth"></a>Gateway connections in bits/seconds ++Aggregation type: *Avg* ++This metric shows the bits per second for ingress and egress to Azure through the ExpressRoute gateway. You can split this metric further to see specific connections to the ExpressRoute circuit. +++### ExpressRoute Direct metrics ++#### <a name = "directin"></a>Bits In Per Second - Split by link ++Aggregation type: *Avg* ++You can view the bits in per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare inbound bandwidth for both links. +++#### <a name = "directout"></a>Bits Out Per Second - Split by link ++Aggregation type: *Avg* ++You can also view the bits out per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare outbound bandwidth for both links. +++#### <a name = "admin"></a>Admin State - Split by link ++Aggregation type: *Avg* ++You can view the Admin state for each link of the ExpressRoute Direct port pair. The Admin state represents if the physical port is on or off. This state is required to pass traffic across the ExpressRoute Direct connection. +++#### <a name = "line"></a>Line Protocol - Split by link ++Aggregation type: *Avg* ++You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates if the physical link is up and running over ExpressRoute Direct. 
Monitor this dashboard and set alerts to know when the physical connection goes down. +++#### <a name = "rxlight"></a>Rx Light Level - Split by link ++Aggregation type: *Avg* ++You can view the Rx light level (the light level that the ExpressRoute Direct port is **receiving**) for each port. Healthy Rx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Rx light level falls outside of the healthy range. +++>[!NOTE] +> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Rx light levels by lane. However, this is not supported on all deployments. +> ++#### <a name = "txlight"></a>Tx Light Level - Split by link ++Aggregation type: *Avg* ++You can view the Tx light level (the light level that the ExpressRoute Direct port is **transmitting**) for each port. Healthy Tx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Tx light level falls outside of the healthy range. +++>[!NOTE] +> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Tx light levels by lane. However, this is not supported on all deployments. > -#### FastPath routes count (at port level) ++Aggregation type: *Max* ++This metric shows the number of FastPath routes configured on an ExpressRoute Direct port. ++*Guidance:* Set an alert for when the number of FastPath routes on the port goes beyond the threshold limit. For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits). +++### ExpressRoute Traffic Collector metrics -#### CPU Utilization - Split by instance +Aggregation type: *Avg* (of percentage of total utilized CPU) -*Granularity: 5 min* ++You can view the CPU utilization of each ExpressRoute Traffic Collector instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck. ++**Guidance:** Set an alert for when average CPU utilization exceeds a certain threshold. +++#### Memory Utilization - Split by instance ++Aggregation type: *Avg* (of percentage of total utilized Memory) ++*Granularity: 5 min* ++You can view the memory utilization of each ExpressRoute Traffic Collector instance. Memory utilization might spike briefly during routine host maintenance, but prolonged high memory utilization could indicate your Azure Traffic Collector is reaching a performance bottleneck. ++**Guidance:** Set an alert for when average memory utilization exceeds a certain threshold. +++#### Count of flow records processed - Split by instances or ExpressRoute circuit ++Aggregation type: *Count* ++*Granularity: 5 min* ++You can view the count of flow records processed by ExpressRoute Traffic Collector, aggregated across ExpressRoute Circuits. You can split the metrics across each ExpressRoute Traffic Collector instance or ExpressRoute circuit when multiple circuits are associated with the ExpressRoute Traffic Collector.
Monitoring this metric helps you understand if you need to deploy more ExpressRoute Traffic Collector instances or migrate ExpressRoute circuit association from one ExpressRoute Traffic Collector deployment to another. ++**Guidance:** Splitting by circuits is recommended when multiple ExpressRoute circuits are associated with an ExpressRoute Traffic Collector deployment. This metric helps determine the flow count of each ExpressRoute circuit and ExpressRoute Traffic Collector utilization by each ExpressRoute circuit. +++++Dimension for ExpressRoute circuit: | Dimension Name | Description |-| - | -- | -| **PeeringType** | The type of peering configured. The supported values are Microsoft and Private peering. | -| **Peering** | The supported values are Primary and Secondary. | -| **PeeredCircuitSkey** | The remote ExpressRoute circuit service key connected using Global Reach. | +|:|:| +| PeeringType | The type of peering configured. The supported values are Microsoft and Private peering. | +| Peering | The supported values are Primary and Secondary. | +| DeviceRole | | +| PeeredCircuitSkey | The remote ExpressRoute circuit service key connected using Global Reach. | -### Dimension for ExpressRoute gateway +Dimension for ExpressRoute gateway: | Dimension Name | Description |-| - | -- | -| **roleInstance** | The gateway instance. Each ExpressRoute gateway is comprised of multiple instances, and the supported values are GatewayTenantWork_IN_X (where X is a minimum of 0 and a maximum of the number of gateway instances -1). | +|:-- |:-- | +| BgpPeerAddress | | +| ConnectionName | | +| direction | | +| roleInstance | The gateway instance. Each ExpressRoute gateway is composed of multiple instances. The supported values are `GatewayTenantWork_IN_X`, where X is a minimum of 0 and a maximum of the number of gateway instances -1. | -### Dimension for Express Direct +Dimension for Express Direct: | Dimension Name | Description |-| - | -- | -| **Link** | The physical link. Each ExpressRoute Direct port pair is comprised of two physical links for redundancy, and the supported values are link1 and link2. | +|:|:| +| Lane | | +| Link | The physical link. Each ExpressRoute Direct port pair is composed of two physical links for redundancy, and the supported values are link1 and link2. | + -## Resource logs +### Supported resource logs for Microsoft.Network/expressRouteCircuits -This section lists the types of resource logs you can collect for ExpressRoute. -|Resource Log Type | Resource Provider / Type Namespace<br/> and link to individual metrics | -|-|--| -| ExpressRoute Circuit | [Microsoft.Network/expressRouteCircuits](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) | -For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md). +### ExpressRoute Microsoft.Network/expressRouteCircuits -## Azure Monitor Logs tables +- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns) +- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics#columns) +- [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics#columns) -Azure ExpressRoute uses Kusto tables from Azure Monitor Logs. You can query these tables with Log analytics. For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype). 
-## Activity log +- [Microsoft.Network resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftnetwork) -The following table lists the operations related to ExpressRoute that may be created in the Activity log. +The following table lists the operations related to ExpressRoute that might be created in the Activity log. | Operation | Description | |:|:|-| All Administrative operations | All administrative operations including create, update and delete of an ExpressRoute circuit. | +| All Administrative operations | All administrative operations including create, update, and delete of an ExpressRoute circuit. | | Create or update ExpressRoute circuit | An ExpressRoute circuit was created or updated. | | Deletes ExpressRoute circuit | An ExpressRoute circuit was deleted.| -For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md). +For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md). ## Schemas For detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../azure-monitor/essentials/resource-logs-schema.md). -When reviewing any metrics through Log Analytics, the output will contain the following columns: +When you review any metrics through Log Analytics, the output contains the following columns: -|**Column**|**Type**|**Description**| -| | | | -|TimeGrain|string|PT1M (metric values are pushed every minute)| -|Count|real|Usually equal to 2 (each MSEE pushes a single metric value every minute)| -|Minimum|real|The minimum of the two metric values pushed by the two MSEEs| -|Maximum|real|The maximum of the two metric values pushed by the two MSEEs| -|Average|real|Equal to (Minimum + Maximum)/2| -|Total|real|Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried)| +| Column | Type | Description | +|:-|:--|:| +| TimeGrain | string | PT1M (metric values are pushed every minute) | +| Count | real | Usually equal to 2 (each MSEE pushes a single metric value every minute) | +| Minimum | real | The minimum of the two metric values pushed by the two MSEEs | +| Maximum | real | The maximum of the two metric values pushed by the two MSEEs | +| Average | real | Equal to (Minimum + Maximum)/2 | +| Total | real | Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried) | -## See also +## Related content -- See [Monitoring Azure ExpressRoute](monitor-expressroute.md) for a description of monitoring Azure ExpressRoute.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+- See [Monitor Azure ExpressRoute](monitor-expressroute.md) for a description of monitoring ExpressRoute. +- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
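As a rough illustration of the schema table in this entry, the following sketch projects those columns for one circuit metric in Log Analytics. The `ResourceId` filter is an assumption, so substitute your own circuit's resource ID; `Minimum`, `Maximum`, `Average`, and `Total` are standard `AzureMetrics` columns.

```kusto
// Sketch: inspect the per-minute values pushed by both MSEEs for one circuit metric.
// Total is the main value to focus on, per the schema table above.
AzureMetrics
| where TimeGenerated > ago(1h)
| where ResourceId contains "EXPRESSROUTECIRCUITS" // substitute your circuit's resource ID
| where MetricName == "BitsInPerSecond"
| project TimeGenerated, MetricName, Minimum, Maximum, Average, Total
| order by TimeGenerated desc
```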
expressroute | Monitor Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute.md | Title: Monitoring Azure ExpressRoute -description: Start here to learn how to monitor Azure ExpressRoute. -+ Title: Monitor Azure ExpressRoute +description: Start here to learn how to monitor Azure ExpressRoute by using Azure Monitor. This article includes links to other resources. Last updated : 07/11/2024++ -- Previously updated : 03/31/2024 -# Monitoring Azure ExpressRoute +# Monitor Azure ExpressRoute -When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. -This article describes the monitoring data generated by Azure ExpressRoute. Azure ExpressRoute uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md). - -## ExpressRoute insights --Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called *insights*. ExpressRoute uses Network insights to provide a detailed topology mapping of all ExpressRoute components (peerings, connections, gateways) in relation with one another. Network insights for ExpressRoute also have preloaded metrics dashboard for availability, throughput, packet drops, and gateway metrics. For more information, see [Azure ExpressRoute Insights using Networking Insights](expressroute-network-insights.md). -## Monitoring data --Azure ExpressRoute collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). --See [Monitoring Azure ExpressRoute data reference](monitor-expressroute-reference.md) for detailed information on the metrics and logs metrics created by Azure ExpressRoute. --## Collection and routing +For more information about the resource types for ExpressRoute, see [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md). -Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. See [Create diagnostic setting to collect platform logs and metrics in Azure](.. > [!IMPORTANT] > Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator). -The metrics and logs you can collect are discussed in the following sections. ++For a list of available metrics for ExpressRoute, see [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md#metrics). ++> [!NOTE] +> Using **Classic Metrics** is not recommended. +> ## Analyzing metrics -You can analyze metrics for *Azure ExpressRoute* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. 
See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. +You can analyze metrics for *Azure ExpressRoute* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/metrics-page.png" alt-text="Screenshot of the metrics dashboard for ExpressRoute."::: For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). -* To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. -* To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. -* To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*. |
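To complement the metrics explorer filters described in this entry, a Log Analytics query like the following sketch lists which ExpressRoute circuit metrics are actually arriving in a workspace. It assumes a diagnostic setting routes the circuit's metrics to the workspace, and the `ResourceId` filter is illustrative.

```kusto
// Sketch: enumerate ExpressRoute circuit metrics present in the workspace.
AzureMetrics
| where TimeGenerated > ago(1d)
| where ResourceId contains "EXPRESSROUTECIRCUITS" // assumes circuit metrics are routed here
| summarize samples = count(), latest = max(TimeGenerated) by MetricName
| order by MetricName asc
```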