Updates from: 07/24/2024 01:12:51
Service Microsoft Docs article Related commit history on GitHub Change details
advisor Advisor Alerts Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-arm.md
Title: Create Azure Advisor alerts for new recommendations using Resource Manager template
-description: Learn how to set up an alert for new recommendations from Azure Advisor using an Azure Resource Manager template (ARM template).
+ Title: Create Advisor alerts for new recommendations by using Resource Manager template
+description: Learn how to set up an alert for new recommendations from Azure Advisor by using an Azure Resource Manager template (ARM template).
Last updated 06/29/2020
-# Quickstart: Create Azure Advisor alerts on new recommendations using an ARM template
+# Quickstart: Create Advisor alerts on new recommendations by using an ARM template
-This article shows you how to set up an alert for new recommendations from Azure Advisor using an Azure Resource Manager template (ARM template).
+This article shows you how to set up an alert for new recommendations from Azure Advisor by using an Azure Resource Manager template (ARM template).
[!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
-Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on.
+Whenever Advisor detects a new recommendation for one of your resources, an event is stored in the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Advisor by using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on.
You can also determine the types of recommendations by using these properties:
- Impact level
- Recommendation type
-You can also configure the action that will take place when an alert is triggered by:
+You can also configure the action that takes place when an alert is triggered by:
-- Selecting an existing action group-- Creating a new action group
+- Selecting an existing action group.
+- Creating a new action group.
To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md).

> [!NOTE]
-> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations are not supported.
+> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations aren't supported.
## Prerequisites

- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
+- To run the commands from your local computer, install the Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Review the template
The template defines two resources:
## Deploy the template
-Deploy the template using any standard method for [deploying an ARM template](../azure-resource-manager/templates/deploy-portal.md) such as the following examples using CLI and PowerShell. Replace the sample values for **Resource Group**, and **emailAddress** with appropriate values for your environment. The workspace name must be unique among all Azure subscriptions.
+Deploy the template by using any standard method for [deploying an ARM template](../azure-resource-manager/templates/deploy-portal.md), such as the following examples that use the CLI and PowerShell. Replace the sample values for `ResourceGroup` and `emailAddress` with appropriate values for your environment. The workspace name must be unique among all Azure subscriptions.
# [CLI](#tab/CLI)
New-AzResourceGroupDeployment -Name CreateAdvisorAlert -ResourceGroupName my-res
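If you prefer to drive the same deployment from code, a rough equivalent with the `Azure.ResourceManager` SDK might look like the following sketch. The template file name (`azuredeploy.json`), the subscription ID, and the email address are placeholder assumptions, not values defined by this quickstart:

```csharp
using System;
using System.IO;
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Resources;
using Azure.ResourceManager.Resources.Models;

// Authenticate and scope to the target resource group (IDs are placeholders).
var armClient = new ArmClient(new DefaultAzureCredential());
var rgId = ResourceGroupResource.CreateResourceIdentifier("<subscription-id>", "my-resource-group");
ResourceGroupResource resourceGroup = armClient.GetResourceGroupResource(rgId);

// Deploy the ARM template; emailAddress is the parameter this quickstart asks you to replace.
var properties = new ArmDeploymentProperties(ArmDeploymentMode.Incremental)
{
    Template = BinaryData.FromString(File.ReadAllText("azuredeploy.json")),
    Parameters = BinaryData.FromObjectAsJson(new
    {
        emailAddress = new { value = "user@contoso.com" }
    })
};
await resourceGroup.GetArmDeployments().CreateOrUpdateAsync(
    WaitUntil.Completed, "CreateAdvisorAlert", new ArmDeploymentContent(properties));
```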
## Validate the deployment
-Verify that the workspace has been created using one of the following commands. Replace the sample values for **Resource Group** with the value you used above.
+Verify that the workspace was created by using one of the following commands. Replace the sample value for **Resource Group** with the value that you used in the previous example.
# [CLI](#tab/CLI)
Get-AzActivityLogAlert -ResourceGroupName my-resource-group -Name AdvisorAlertsT
## Clean up resources
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the alert rule and the related resources. To delete the resource group by using Azure CLI or Azure PowerShell
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete the resource group, which deletes the alert rule and the related resources. To delete the resource group by using the CLI or PowerShell:
# [CLI](#tab/CLI)
Remove-AzResourceGroup -Name my-resource-group
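From the SDK, the equivalent cleanup is a single call on the resource group resource (an illustrative sketch, reusing the `resourceGroup` object from the deployment sketch earlier):

```csharp
// Deleting the resource group also deletes the alert rule and related resources.
await resourceGroup.DeleteAsync(WaitUntil.Completed);
```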
-## Next steps
+## Related content
-- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.
+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md) and learn how to receive alerts.
- Learn more about [action groups](../azure-monitor/alerts/action-groups.md).
advisor Advisor Alerts Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-bicep.md
Title: Create Azure Advisor alerts for new recommendations using Bicep
-description: Learn how to set up an alert for new recommendations from Azure Advisor using Bicep.
+ Title: Create Advisor alerts for new recommendations by using Bicep
+description: Learn how to set up an alert for new recommendations from Azure Advisor by using Bicep.
Last updated 04/26/2022
-# Quickstart: Create Azure Advisor alerts on new recommendations using Bicep
+# Quickstart: Create Advisor alerts on new recommendations by using Bicep
-This article shows you how to set up an alert for new recommendations from Azure Advisor using Bicep.
+This article shows you how to set up an alert for new recommendations from Azure Advisor by using Bicep.
[!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)]
-Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally select a resource group to specify the resources that you want to receive alerts on.
+Whenever Advisor detects a new recommendation for one of your resources, an event is stored in the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Advisor by using a recommendation-specific alerts creation experience. You can select a subscription and optionally select a resource group to specify the resources that you want to receive alerts on.
You can also determine the types of recommendations by using these properties:
- Impact level
- Recommendation type
-You can also configure the action that will take place when an alert is triggered by:
+You can also configure the action that takes place when an alert is triggered by:
-- Selecting an existing action group-- Creating a new action group
+- Selecting an existing action group.
+- Creating a new action group.
To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md).

> [!NOTE]
-> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations are not supported.
+> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations aren't supported.
## Prerequisites

- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
+- To run the commands from your local computer, install the Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Review the Bicep file
The Bicep file defines two resources:
## Deploy the Bicep file
-1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file by using either the Azure CLI or Azure PowerShell.
# [CLI](#tab/CLI)
The Bicep file defines two resources:
> [!NOTE]
- > Replace **\<alert-name\>** with the name of the alert.
+ > Replace \<alert-name\> with the name of the alert.
- When the deployment finishes, you should see a message indicating the deployment succeeded.
+ When the deployment finishes, you should see a message that indicates the deployment succeeded.
## Validate the deployment
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+Use the Azure portal, the Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
# [CLI](#tab/CLI)
Get-AzResource -ResourceGroupName exampleRG
## Clean up resources
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group.
+When you no longer need the resources, use the Azure portal, the Azure CLI, or Azure PowerShell to delete the resource group.
# [CLI](#tab/CLI)
Remove-AzResourceGroup -Name exampleRG
-## Next steps
+## Related content
-- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.
+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md) and learn how to receive alerts.
- Learn more about [action groups](../azure-monitor/alerts/action-groups.md).
advisor Advisor Alerts Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-portal.md
Title: Create Azure Advisor alerts for new recommendations using Azure portal
-description: Create Azure Advisor alerts for new recommendation
+ Title: Create Advisor alerts for new recommendations by using the Azure portal
+description: Create Azure Advisor alerts for new recommendations by using the Azure portal.
Last updated 09/09/2019
-# Create Azure Advisor alerts on new recommendations using the Azure portal
+# Create Azure Advisor alerts on new recommendations by using the Azure portal
-This article shows you how to set up an alert for new recommendations from Azure Advisor using the Azure portal.
+This article shows you how to set up an alert for new recommendations from Azure Advisor by using the Azure portal.
-Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on.
+Whenever Advisor detects a new recommendation for one of your resources, an event is stored in the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Advisor by using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on.
You can also determine the types of recommendations by using these properties:
* Impact level
* Recommendation type
-You can also configure the action that will take place when an alert is triggered by:
+You can also configure the action that takes place when an alert is triggered by:
-* Selecting an existing action group
-* Creating a new action group
+* Selecting an existing action group.
+* Creating a new action group.
To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md).
-> [!NOTE]
-> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations are not supported.
+> [!NOTE]
+> Advisor alerts are currently only available for High Availability, Performance, and Cost recommendations. Security recommendations aren't supported.
-## Create alert rule
-1. In the **portal**, select **Azure Advisor**.
+## Create an alert rule
- ![Azure Advisor in portal](./media/advisor-alerts/create1.png)
+Follow these steps to create an alert rule.
-2. In the **Monitoring** section of the left menu, select **Alerts**.
+1. In the [Azure portal](https://portal.azure.com), select **Advisor**.
- ![Alerts in Advisor](./media/advisor-alerts/create2.png)
+ ![Screenshot that shows Advisor in the portal.](./media/advisor-alerts/create1.png)
-3. Select **New Advisor Alert**.
+1. In the **Monitoring** section on the left menu, select **Alerts**.
- ![New Advisor alert](./media/advisor-alerts/create3.png)
+ ![Screenshot that shows Alerts in Advisor.](./media/advisor-alerts/create2.png)
-4. In the **Scope** section, select the subscription and optionally the resource group that you want to be alerted on.
+1. Select **New Advisor Alert**.
- ![Advisor alert scope](./media/advisor-alerts/create4.png)
+ ![Screenshot that shows New Advisor Alert.](./media/advisor-alerts/create3.png)
-5. In the **Condition** section, select the method you want to use for configuring your alert. If you want to alert for all recommendations for a certain category and/or impact level, select **Category and impact level**. If you want to alert for all recommendations of a certain type, select **Recommendation type**.
+1. In the **Scope** section, select the subscription and optionally the resource group that you want to be alerted on.
- ![Azure Advisor alert condition](./media/advisor-alerts/create5.png)
+ ![Screenshot that shows Advisor alert scope.](./media/advisor-alerts/create4.png)
-6. Depending on the Configure by option that you select, you will be able to specify the criteria. If you want all recommendations, just leave the remaining fields blank.
+1. In the condition section, select the method you want to use for configuring your alert. If you want to alert for all recommendations for a certain category or impact level, select **Category and impact level**. If you want to alert for all recommendations of a certain type, select **Recommendation type**.
- ![Advisor alert action group](./media/advisor-alerts/create6.png)
+ ![Screenshot that shows Advisor alert conditions.](./media/advisor-alerts/create5.png)
-7. In the **action groups** section, select **Add existing** to use an action group you already created or select **Create new** to set up a new [action group](../azure-monitor/alerts/action-groups.md).
+1. Depending on the **Configured by** option that you select, you can specify the criteria. If you want all recommendations, leave the remaining fields blank.
- ![Advisor alert add existing](./media/advisor-alerts/create7.png)
+ ![Screenshot that shows Advisor alert action group.](./media/advisor-alerts/create6.png)
-8. In the Alert details section, give your alert a name and short description. If you want your alert to be enabled, leave **Enable rule upon creation** selection set to **Yes**. Then select the resource group to save your alert to. This will not impact the targeting scope of the recommendation.
+1. In the action groups section, choose **Select existing** to use an action group that you already created or select **Create new** to set up a new [action group](../azure-monitor/alerts/action-groups.md).
- :::image type="content" source="./media/advisor-alerts/create8.png" alt-text="Screenshot of the Alert details section.":::
+ ![Screenshot that shows Advisor alert Select existing.](./media/advisor-alerts/create7.png)
+1. In the alert details section, give your alert a name and short description. If you want your alert to be enabled, leave the **Enable rule upon creation** selection set to **Yes**. Then select the resource group to save your alert to. This setting won't affect the targeting scope of the recommendation.
+
+ :::image type="content" source="./media/advisor-alerts/create8.png" alt-text="Screenshot that shows the alert details section.":::
## Configure recommendation alerts to use a webhook
-This section shows you how to configure Azure Advisor alerts to send recommendation data through webhooks to your existing systems.
-You can set up alerts to be notified when you have a new Advisor recommendation on one of your resources. These alerts can notify you through email or text message, but they can also be used to integrate with your existing systems through a webhook.
+This section shows you how to configure Advisor alerts to send recommendation data through webhooks to your existing systems.
+
+You can set up alerts to be notified when you have a new Advisor recommendation on one of your resources. These alerts can notify you through email or text message. They can also be used to integrate with your existing systems through a webhook.
+### Use the Advisor recommendation alert payload
-### Using the Advisor recommendation alert payload
-If you want to integrate Advisor alerts into your own systems using a webhook, you will need to parse the JSON payload that is sent from the notification.
+If you want to integrate Advisor alerts into your own systems by using a webhook, you need to parse the JSON payload that's sent from the notification.
-When you set up your action group for this alert, you select if you would like to use the common alert schema. If you select the common alert schema, your payload will look like:
+When you set up your action group for this alert, you select if you want to use the common alert schema. If you select the common alert schema, your payload looks like this example:
```json
{
When you set up your action group for this alert, you select if you would like t
}
```
-If you do not use the common schema, your payload looks like the following:
+If you don't use the common schema, your payload looks like the following example:
```json
{
If you do not use the common schema, your payload looks like the following:
}
```
-In either schema, you can identify Advisor recommendation events by looking for **eventSource** is `Recommendation` and **operationName** is `Microsoft.Advisor/recommendations/available/action`.
+In either schema, you can identify Advisor recommendation events by checking that `eventSource` is `Recommendation` and `operationName` is `Microsoft.Advisor/recommendations/available/action`. For an example of parsing these values, see the sketch after the following field list.
-Some of the other important fields that you may want to use are:
+Some of the other important fields that you might want to use are:
-* *alertTargetIDs* (in the common schema) or *resourceId* (legacy schema)
-* *recommendationType*
-* *recommendationName*
-* *recommendationCategory*
-* *recommendationImpact*
-* *recommendationResourceLink*
+* `alertTargetIDs` (in the common schema) or `resourceId` (legacy schema)
+* `recommendationType`
+* `recommendationName`
+* `recommendationCategory`
+* `recommendationImpact`
+* `recommendationResourceLink`
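As a concrete illustration of the parsing described above, here's a minimal C# sketch that uses `System.Text.Json`. Because the common and legacy schemas nest these fields differently, the sketch searches the payload for each property name rather than hard-coding a path; reading the body from standard input is a stand-in for however your endpoint receives the request:

```csharp
using System;
using System.Text.Json;

// Stand-in: pipe a sample payload in; a real webhook reads the HTTP request body.
string webhookBody = Console.In.ReadToEnd();

using JsonDocument payload = JsonDocument.Parse(webhookBody);
JsonElement root = payload.RootElement;

// Advisor recommendation events carry these two marker values.
bool isAdvisorEvent =
    FindProperty(root, "eventSource") == "Recommendation" &&
    FindProperty(root, "operationName") == "Microsoft.Advisor/recommendations/available/action";

if (isAdvisorEvent)
{
    Console.WriteLine(FindProperty(root, "recommendationType"));
    Console.WriteLine(FindProperty(root, "recommendationCategory"));
    Console.WriteLine(FindProperty(root, "recommendationResourceLink"));
}

// Depth-first search for a property name, wherever the schema nests it.
static string? FindProperty(JsonElement element, string name)
{
    if (element.ValueKind == JsonValueKind.Object)
    {
        foreach (JsonProperty property in element.EnumerateObject())
        {
            if (property.Name.Equals(name, StringComparison.OrdinalIgnoreCase))
                return property.Value.ToString();
            if (FindProperty(property.Value, name) is string nested)
                return nested;
        }
    }
    else if (element.ValueKind == JsonValueKind.Array)
    {
        foreach (JsonElement item in element.EnumerateArray())
        {
            if (FindProperty(item, name) is string found)
                return found;
        }
    }
    return null;
}
```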
+## Manage your alerts
-## Manage your alerts
+From Advisor, you can edit, delete, or disable and enable your recommendations alerts.
-From Azure Advisor, you can edit, delete, or disable and enable your recommendations alerts.
+1. In the [Azure portal](https://portal.azure.com), select **Advisor**.
-1. In the **portal**, select **Azure Advisor**.
+ :::image type="content" source="./media/advisor-alerts/create1.png" alt-text="Screenshot that shows the Azure portal menu with Advisor selected.":::
- :::image type="content" source="./media/advisor-alerts/create1.png" alt-text="Screenshot of the Azure portal menu showing Azure Advisor selected.":::
+1. In the **Monitoring** section on the left menu, select **Alerts**.
-2. In the **Monitoring** section of the left menu, select **Alerts**.
+ :::image type="content" source="./media/advisor-alerts/create2.png" alt-text="Screenshot that shows the Azure portal menu with Alerts selected.":::
- :::image type="content" source="./media/advisor-alerts/create2.png" alt-text="Screenshot of the Azure portal menu showing Alerts selected.":::
+1. To edit an alert, select the alert name to open the alert, and then edit the fields that you want to change.
-3. To edit an alert, click on the Alert name to open the alert and edit the fields you want to edit.
+1. To delete, enable, or disable an alert, select the ellipsis at the end of the row. Then select the action you want to take.
-4. To delete, enable, or disable an alert, click on the ellipse at the end of the row and then select the action you would like to take.
-
+## Related content
-## Next steps
-- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md), and learn how to receive alerts.
+- Get an [overview of activity log alerts](../azure-monitor/alerts/alerts-overview.md) and learn how to receive alerts.
- Learn more about [action groups](../azure-monitor/alerts/action-groups.md).
advisor Azure Advisor Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/azure-advisor-score.md
Last updated 07/12/2024
# Use Advisor score
+This article shows you how to use Azure Advisor score to measure optimization progress.
+ ## Introduction to score
-Azure Advisor provides best practice recommendations for your workloads. These recommendations are personalized and actionable to help you:
+Advisor provides best-practice recommendations for your workloads. These recommendations are personalized and actionable to help you:
* Improve the posture of your workloads and optimize your Azure deployments.
* Proactively prevent top issues by following best practices.
-* Assess your Azure workloads against the five pillars of the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* Assess your Azure workloads against the five pillars of the [Azure Well-Architected Framework](/azure/architecture/framework/).
As a core feature of Advisor, Advisor score can help you achieve these goals effectively and efficiently.
To get the most out of Azure, it's crucial to understand where you are in your w
It's also important to track and report the progress you're making in this optimization journey. With Advisor score, you can easily do all these things with the new gamification experience.
-As your personalized cloud consultant, Azure Advisor continually assesses your usage telemetry and resource configuration to check for industry best practices. Advisor then aggregates its findings into a single score. With this score, you can tell at a glance if you're taking the necessary steps to build reliable, secure, and cost-efficient solutions.
+As your personalized cloud consultant, Advisor continually assesses your usage telemetry and resource configuration to check for industry best practices. Advisor then aggregates its findings into a single score. With this score, you can tell at a glance if you're taking the necessary steps to build reliable, secure, and cost-efficient solutions.
The Advisor score consists of an overall score, which can be further broken down into five category scores. One score for each category of Advisor represents the five pillars of the Well-Architected Framework.
You can track the progress you make over time by viewing your overall score and
## Use Advisor score in the portal
-1. Sign in to the [**Azure portal**](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
-1. Select **Advisor score** in the left menu pane to open score page.
+1. Select **Advisor score** on the left pane to open the score page.
## Interpret an Advisor score
Advisor displays your overall Advisor score and a breakdown for Advisor categori
* **Score by category** for each recommendation tells you which outstanding recommendations improve your score the most. These values reflect both the weight of the recommendation and the predicted ease of implementation. These factors help to make sure you can get the most value with your time. They also help you with prioritization.
* **Category score impact** for each recommendation helps you prioritize your remediation actions for each category.
-The contribution of each recommendation to your category score is shown clearly on the **Advisor score** page in the Azure portal. You can increase each category score by the percentage point listed in the **Potential score increase** column. This value reflects both the weight of the recommendation within the category and the predicted ease of implementation to address the potentially easiest tasks. Focusing on the recommendations with the greatest score impact will help you make the most progress with time.
+The contribution of each recommendation to your category score is shown clearly on the **Advisor score** page in the Azure portal. You can increase each category score by the percentage point listed in the **Potential score increase** column. This value reflects both the weight of the recommendation within the category and the predicted ease of implementation to address the potentially easiest tasks. Focusing on the recommendations with the greatest score impact helps you make the most progress with time.
![Screenshot that shows the Advisor score impact.](https://user-images.githubusercontent.com/41593141/195171044-6a45fa99-a291-49f3-8914-2b596771e63b.png)
-If any Advisor recommendations aren't relevant for an individual resource, you can postpone or dismiss those recommendations. They'll be excluded from the score calculation with the next refresh. Advisor will also use this input as feedback to improve the model.
+If any Advisor recommendations aren't relevant for an individual resource, you can postpone or dismiss those recommendations. They're excluded from the score calculation with the next refresh. Advisor also uses this input as feedback to improve the model.
## How is an Advisor score calculated?

Advisor displays your category scores and your overall Advisor score as percentages. A score of 100% in any category means all your resources, *assessed by Advisor*, follow the best practices that Advisor recommends. On the other end of the spectrum, a score of 0% means that none of your resources, assessed by Advisor, follows Advisor recommendations.
-**Each of the five categories has a highest potential score of 100.** Your overall Advisor score is calculated as a sum of each applicable category score, divided by the sum of the highest potential score from all applicable categories. In most cases this means adding up five Advisor scores for each category and dividing by 500. But *each category score is calculated only if you use resources that are assessed by Advisor*.
+**Each of the five categories has a highest potential score of 100.** Your overall Advisor score is calculated as a sum of each applicable category score, divided by the sum of the highest potential score from all applicable categories. In most cases, this means adding up five Advisor scores for each category and dividing by 500. But *each category score is calculated only if you use resources that are assessed by Advisor*.
### Advisor score calculation example
-* **Single subscription score:** This example is the simple mean of all Advisor category scores for your subscription. If the Advisor category scores are - **Cost** = 73, **Reliability** = 85, **Operational excellence** = 77, and **Performance** = 100, the Advisor score would be (73 + 85 + 77 + 100)/(4x100) = 0.84% or 84%.
-* **Multiple subscriptions score:** When multiple subscriptions are selected, the overall Advisor score is calculated as an average of aggregated category scores. Each category score is calculated using individual subscription score and subscription consumsumption based weight. Overall score is calculated as sum of aggregated category scores divided by the sum of the highest potential scores.
+* **Single subscription score:** This example is the simple mean of all Advisor category scores for your subscription. If the Advisor category scores are **Cost** = 73, **Reliability** = 85, **Operational excellence** = 77, and **Performance** = 100, the Advisor score would be (73 + 85 + 77 + 100)/(4 x 100) = 0.84, or 84%.
+* **Multiple subscriptions score:** When multiple subscriptions are selected, the overall Advisor score is calculated as an average of aggregated category scores. Each category score is calculated by using the individual subscription score and the subscription consumption-based weight. The overall score is calculated as the sum of aggregated category scores divided by the sum of the highest potential scores. A sketch of this weighting follows this list.
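A short C# sketch of that weighted aggregation, with hypothetical scores and consumption weights (Advisor derives the real weights from each subscription's consumption):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical per-subscription category scores and consumption-based weights.
var subscriptions = new[]
{
    (Weight: 700.0, Scores: new Dictionary<string, double> { ["Cost"] = 73, ["Reliability"] = 85 }),
    (Weight: 300.0, Scores: new Dictionary<string, double> { ["Cost"] = 90, ["Reliability"] = 65 }),
};
string[] categories = { "Cost", "Reliability" };
double totalWeight = subscriptions.Sum(s => s.Weight);

// Each category score is a consumption-weighted average across subscriptions.
var categoryScores = categories.ToDictionary(
    category => category,
    category => subscriptions.Sum(s => s.Weight * s.Scores[category]) / totalWeight);

// Overall: summed category scores over the highest potential total (100 per category).
double overall = categoryScores.Values.Sum() / (categories.Length * 100);
Console.WriteLine($"Overall Advisor score: {overall:P0}"); // about 79%
```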
### Scoring methodology
The calculation of the Advisor score can be summarized in four steps:
Advisor applies this model at an Advisor category level to give an Advisor score for each category. **Security** uses a [secure score](../defender-for-cloud/secure-score-security-controls.md) model. A simple average produces the final Advisor score.
-## Frequently Asked Questions (FAQs)
+## Frequently asked questions (FAQs)
+
+Here are answers to common questions about Advisor score.
### How often is my score refreshed?
Your score is refreshed at least once per day.
Your score can change if you remediate impacted resources by adopting the best practices that Advisor recommends. If you or anyone with permissions on your subscription has modified or created new resources, you might also see fluctuations in your score. Your score is based on a ratio of the cost-impacted resources relative to the total cost of all resources.
-### I implemented a recommendation but my score did not change. Why the score did not increase?
+### I implemented a recommendation but my score didn't change. Why didn't the score increase?
-The score does not reflect adopted recommendations right away. It takes at least 24 hours for the score to change after the recommendation is remediated.
+The score doesn't reflect adopted recommendations right away. It takes at least 24 hours for the score to change after the recommendation is remediated.
### Why do some recommendations have the empty "-" value in the category score impact column?
This message means that the recommendation is new, and we're working on bringing
### What if a recommendation isn't relevant?
-If you dismiss a recommendation from Advisor, it is excluded from the calculation of your score. Dismissing recommendations also helps Advisor improve the quality of recommendations.
+If you dismiss a recommendation from Advisor, it's excluded from the calculation of your score. Dismissing recommendations also helps Advisor improve the quality of recommendations.
### Why don't I have a score for one or more categories or subscriptions?
The scoring methodology is designed to control for the number of resources on a
### Does my score depend on how much I spend on Azure?
-No. Your score isn't necessarily a reflection of how much you spend. Unnecessary spending will result in a lower **Cost** score.
+No. Your score isn't necessarily a reflection of how much you spend. Unnecessary spending results in a lower **Cost** score.
-## Next steps
+## Related content
For more information about Advisor recommendations, see:
advisor View Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/view-recommendations.md
Last updated 01/02/2024
-# Configure Azure Advisor recommendations view
+# Configure the Azure Advisor recommendations view
-Azure Advisor provides recommendations to help you optimize your Azure deployments. Within Advisor, you have access to a few features that help you to narrow down your recommendations to only those that matter to you.
+Azure Advisor provides recommendations to help you optimize your Azure deployments. Within Advisor, you have access to a few features that help you narrow down your recommendations to only the ones that matter to you.
## Configure subscriptions and resource groups
-Advisor gives you the ability to select Subscriptions and Resource Groups that matter to you and your organization. You only see recommendations for the subscriptions and resource groups that you select. By default, all are selected. Configuration settings apply to the subscription or resource group, so the same settings apply to everyone that has access to that subscription or resource group. Configuration settings can be changed in the Azure portal or programmatically.
+Advisor gives you the ability to select subscriptions and resource groups that matter to you and your organization. You only see recommendations for the subscriptions and resource groups that you select. By default, all are selected. Configuration settings apply to the subscription or resource group, so the same settings apply to everyone that has access to that subscription or resource group. Configuration settings can be changed in the Azure portal or programmatically.
To make changes in the Azure portal:
1. Select **Configuration** from the menu.
- :::image type="content" source="./media/view-recommendations/configuration.png" alt-text="Screenshot of Azure Advisor showing configuration pane.":::
+ :::image type="content" source="./media/view-recommendations/configuration.png" alt-text="Screenshot of Azure Advisor showing the Configuration pane.":::
-1. Check the box in the **Include** column for any subscriptions or resource groups to receive Advisor recommendations. If the box is disabled, you may not have permission to make a configuration change on that subscription or resource group. Learn more about [permissions in Azure Advisor](permissions.md).
+1. Select the checkbox in the **Include** column for any subscriptions or resource groups to receive Advisor recommendations. If the box is disabled, you might not have permission to make a configuration change on that subscription or resource group. Learn more about [permissions in Azure Advisor](permissions.md).
-1. Click **Apply** at the bottom after you make a change.
+1. Select **Apply** at the bottom after you make a change.
-## Filtering your view in the Azure portal
+## Filter your view in the Azure portal
-Configuration settings remain active until changed. If you want to limit the view of recommendations for a single viewing, you can use the drop downs provided at the top of the Advisor panel. You can filter recommendations by subscription, resource group, workload, resource type, recommendation status and impact. These filters are available for Overview, Score, Cost, Security, Reliability, Operational Excellence, Performance and All Recommendations pages.
+Configuration settings remain active until changed. If you want to limit the view of recommendations for a single viewing, you can use the dropdown lists provided at the top of the Advisor pane. You can filter recommendations by subscription, resource group, workload, resource type, recommendation status, and impact. These filters are available for **Overview**, **Score**, **Cost**, **Security**, **Reliability**, **Operational excellence**, **Performance**, and **All recommendations** pages.
- :::image type="content" source="./media/view-recommendations/filtering.png" alt-text="Screenshot of Azure Advisor showing filtering options.":::
+ :::image type="content" source="./media/view-recommendations/filtering.png" alt-text="Screenshot of Advisor showing filtering options.":::
> [!NOTE]
-> Contact your account team to add new workloads to the workload filter or edit workload names.
+> Contact your account team to add new workloads to the workload filter or edit workload names.
-## Dismissing and postponing recommendations
+## Dismiss and postpone recommendations
-Azure Advisor allows you to dismiss or postpone recommendations on a single resource. If you dismiss a recommendation, you do not see it again unless you manually activate it. However, postponing a recommendation allows you to specify a duration after which the recommendation is automatically activated again. Postponing can be done in the Azure portal or programmatically.
+Advisor allows you to dismiss or postpone recommendations on a single resource. If you dismiss a recommendation, you don't see it again unless you manually activate it. However, postponing a recommendation allows you to specify a duration after which the recommendation is automatically activated again. Postponing can be done in the Azure portal or programmatically.
### Postpone a single recommendation in the Azure portal

1. Open [Azure Advisor](https://aka.ms/azureadvisordashboard) in the Azure portal.
-1. Select a recommendation category to view your recommendations
-1. Select a recommendation from the list of recommendations
-1. Select Postpone or Dismiss for the recommendation you want to postpone or dismiss
+1. Select a recommendation category to view your recommendations.
+1. Select a recommendation from the list of recommendations.
+1. Select **Postpone** or **Dismiss** for the recommendation you want to postpone or dismiss.
- :::image type="content" source="./media/view-recommendations/postpone-dismiss.png" alt-text="Screenshot of the Use Managed Disks window showing the select column and Postpone and Dismiss actions for a single recommendation highlighted.":::
+ :::image type="content" source="./media/view-recommendations/postpone-dismiss.png" alt-text="Screenshot that shows the Use Managed Disks page with the Select column and Postpone and Dismiss actions for a single recommendation highlighted.":::
### Postpone or dismiss multiple recommendations in the Azure portal
Azure Advisor allows you to dismiss or postpone recommendations on a single reso
1. Select a recommendation category to view your recommendations.
1. Select a recommendation from the list of recommendations.
1. Select the checkbox at the left of the row for each resource for which you want to postpone or dismiss the recommendation.
-1. Select **Postpone** or **Dismiss** at the top left of the table.
+1. Select **Postpone** or **Dismiss** in the upper-left corner of the table.
- :::image type="content" source="./media/view-recommendations/postpone-dismiss-multiple.png" alt-text="Screenshot of the Use Managed Disks window showing the select column and Postpone and Dismiss actions on the top left of the table highlighted.":::
+ :::image type="content" source="./media/view-recommendations/postpone-dismiss-multiple.png" alt-text="Screenshot that shows the Use Managed Disks page with the Select column and Postpone and Dismiss actions in the table highlighted.":::
> [!NOTE]
-> You need contributor or owner permission to dismiss or postpone a recommendation. Learn more about permissions in Azure Advisor.
+> You need Contributor or Owner permission to dismiss or postpone a recommendation. Learn more about permissions in Advisor.
-> [!NOTE]
-> If the selection boxes are disabled, recommendations may still be loading. Please wait for all recommendations to load before trying to postpone or dismiss.
+If the selection boxes are disabled, recommendations might still be loading. Wait for all recommendations to load before you try to postpone or dismiss them.
### Reactivate a postponed or dismissed recommendation
-You can activate a recommendation that has been postponed or dismissed. This action can be done in the Azure portal or programmatically. In the Azure portal:
+You can activate a recommendation that was postponed or dismissed. This action can be done in the Azure portal or programmatically. In the Azure portal:
-1. Open [Azure Advisor](https://aka.ms/azureadvisordashboard) in the Azure portal.
+1. Open [Advisor](https://aka.ms/azureadvisordashboard) in the Azure portal.
-1. Change the filter on the Overview panel to **Postponed**. Advisor then displays postponed or dismissed recommendations.
+1. Change the filter on the **Overview** pane to **Postponed**. Advisor then displays postponed or dismissed recommendations.
- :::image type="content" source="./media/view-recommendations/activate-postponed.png" alt-text="Screenshot of the Azure Advisor window showing the Postponed drop-down menu selected.":::
+ :::image type="content" source="./media/view-recommendations/activate-postponed.png" alt-text="Screenshot that shows the Advisor pane with the Postponed dropdown menu selected.":::
1. Select a category to see **Postponed** and **Dismissed** recommendations.
-1. Select a recommendation from the list of recommendations. This opens recommendations with the **Postponed & Dismissed** tab already selected to show the resources for which this recommendation has been postponed or dismissed.
+1. Select a recommendation from the list of recommendations. This action opens recommendations with the **Postponed & Dismissed** tab already selected to show the resources for which this recommendation was postponed or dismissed.
-1. Click on **Activate** at the end of the row. Once clicked, the recommendation is active for that resource and so removed from this table. The recommendation is now visible in the **Active** tab.
-
- :::image type="content" source="./media/view-recommendations/activate-postponed-2.png" alt-text="Screenshot of the Enable Soft Delete window showing the Postponed & Dismissed tab with the Activate action highlighted.":::
+1. Select **Activate** at the end of the row. The recommendation is now active for that resource and removed from the table. The recommendation is visible on the **Active** tab.
-## Next steps
+ :::image type="content" source="./media/view-recommendations/activate-postponed-2.png" alt-text="Screenshot that shows the Enable Soft Delete pane with the Postponed & Dismissed tab and the Activate action highlighted.":::
-This article explains how you can view recommendations that matter to you in Azure Advisor. To learn more about Advisor, see:
+## Related content
+
+This article explains how you can view recommendations that matter to you in Advisor. To learn more about Advisor, see:
- [What is Azure Advisor?](advisor-overview.md)
-- [Getting Started with Advisor](advisor-get-started.md)
+- [Get started with Advisor](advisor-get-started.md)
- [Permissions in Azure Advisor](permissions.md)
---
ai-services Liveness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/liveness.md
The high-level steps involved in liveness orchestration are illustrated below:
#### [C#](#tab/csharp)

```csharp
- var endpoint = new Uri(System.Environment.GetEnvironmentVariable("VISION_ENDPOINT"));
- var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("VISION_KEY"));
+ var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT"));
+ var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY"));
var sessionClient = new FaceSessionClient(endpoint, credential);
The high-level steps involved in liveness orchestration are illustrated below:
#### [Java](#tab/java)

```java
- String endpoint = System.getenv("VISION_ENDPOINT");
- String accountKey = System.getenv("VISION_KEY");
+ String endpoint = System.getenv("FACE_ENDPOINT");
+ String accountKey = System.getenv("FACE_APIKEY");
FaceSessionClient sessionClient = new FaceSessionClientBuilder() .endpoint(endpoint)
The high-level steps involved in liveness orchestration are illustrated below:
#### [Python](#tab/python)

```python
- endpoint = os.environ["VISION_ENDPOINT"]
- key = os.environ["VISION_KEY"]
+ endpoint = os.environ["FACE_ENDPOINT"]
+ key = os.environ["FACE_APIKEY"]
face_session_client = FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key))
The high-level steps involved in liveness orchestration are illustrated below:
#### [REST API (Windows)](#tab/cmd)

```console
- curl --request POST --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions" ^
- --header "Ocp-Apim-Subscription-Key: %VISION_KEY%" ^
+ curl --request POST --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions" ^
+ --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ^
--header "Content-Type: application/json" ^ --data ^ "{ ^
The high-level steps involved in liveness orchestration are illustrated below:
#### [REST API (Linux)](#tab/bash)

```bash
- curl --request POST --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions" \
- --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" \
+ curl --request POST --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions" \
+ --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \
--header "Content-Type: application/json" \ --data \ '{
The high-level steps involved in liveness orchestration are illustrated below:
#### [REST API (Windows)](#tab/cmd)

```console
- curl --request GET --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^
- --header "Ocp-Apim-Subscription-Key: %VISION_KEY%"
+ curl --request GET --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^
+ --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
```

#### [REST API (Linux)](#tab/bash)

```bash
- curl --request GET --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \
- --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}"
+ curl --request GET --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \
+ --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
```
The high-level steps involved in liveness orchestration are illustrated below:
#### [REST API (Windows)](#tab/cmd)

```console
- curl --request DELETE --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^
- --header "Ocp-Apim-Subscription-Key: %VISION_KEY%"
+ curl --request DELETE --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" ^
+ --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
```

#### [REST API (Linux)](#tab/bash)

```bash
- curl --request DELETE --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \
- --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}"
+ curl --request DELETE --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectliveness/singlemodal/sessions/<session-id>" \
+ --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
```
The high-level steps involved in liveness with verification orchestration are il
#### [C#](#tab/csharp)

```csharp
- var endpoint = new Uri(System.Environment.GetEnvironmentVariable("VISION_ENDPOINT"));
- var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("VISION_KEY"));
+ var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT"));
+ var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY"));
var sessionClient = new FaceSessionClient(endpoint, credential);
The high-level steps involved in liveness with verification orchestration are il
#### [Java](#tab/java)

```java
- String endpoint = System.getenv("VISION_ENDPOINT");
- String accountKey = System.getenv("VISION_KEY");
+ String endpoint = System.getenv("FACE_ENDPOINT");
+ String accountKey = System.getenv("FACE_APIKEY");
FaceSessionClient sessionClient = new FaceSessionClientBuilder() .endpoint(endpoint)
The high-level steps involved in liveness with verification orchestration are il
#### [Python](#tab/python)

```python
- endpoint = os.environ["VISION_ENDPOINT"]
- key = os.environ["VISION_KEY"]
+ endpoint = os.environ["FACE_ENDPOINT"]
+ key = os.environ["FACE_APIKEY"]
face_session_client = FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key))
The high-level steps involved in liveness with verification orchestration are il
#### [REST API (Windows)](#tab/cmd)

```console
- curl --request POST --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" ^
- --header "Ocp-Apim-Subscription-Key: %VISION_KEY%" ^
+ curl --request POST --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" ^
+ --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ^
--form "Parameters=""{\\\""livenessOperationMode\\\"": \\\""passive\\\"", \\\""deviceCorrelationId\\\"": \\\""723d6d03-ef33-40a8-9682-23a1feb7bccd\\\""}""" ^ --form "VerifyImage=@""test.png""" ``` #### [REST API (Linux)](#tab/bash) ```bash
- curl --request POST --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" \
- --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}" \
+ curl --request POST --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions" \
+ --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \
--form 'Parameters="{ \"livenessOperationMode\": \"passive\", \"deviceCorrelationId\": \"723d6d03-ef33-40a8-9682-23a1feb7bccd\"
The high-level steps involved in liveness with verification orchestration are il
#### [REST API (Windows)](#tab/cmd)

```console
- curl --request GET --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^
- --header "Ocp-Apim-Subscription-Key: %VISION_KEY%"
+ curl --request GET --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^
+ --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
```

#### [REST API (Linux)](#tab/bash)

```bash
- curl --request GET --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \
- --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}"
+ curl --request GET --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \
+ --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
```
The high-level steps involved in liveness with verification orchestration are il
#### [REST API (Windows)](#tab/cmd)

```console
- curl --request DELETE --location "%VISION_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^
- --header "Ocp-Apim-Subscription-Key: %VISION_KEY%"
+ curl --request DELETE --location "%FACE_ENDPOINT%/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" ^
+ --header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
```

#### [REST API (Linux)](#tab/bash)

```bash
- curl --request DELETE --location "${VISION_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \
- --header "Ocp-Apim-Subscription-Key: ${VISION_KEY}"
+ curl --request DELETE --location "${FACE_ENDPOINT}/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/<session-id>" \
+ --header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
```
ai-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/add-faces.md
[!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
-This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C# and uses the Azure AI Face .NET client library.
+This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C#.
## Initialization
static async Task WaitCallLimitPerSecondAsync()
}
```
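The digest truncates the body of `WaitCallLimitPerSecondAsync`. A sliding-window throttle along the following lines fits how the guide uses it; this is a sketch, where `_callLimitPerSecond` is an assumed constant and `_timeStampQueue` is the queue that the later snippets enqueue into:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Assumed limit; match it to your Face resource's pricing tier.
private const int _callLimitPerSecond = 10;
private static readonly ConcurrentQueue<DateTime> _timeStampQueue = new ConcurrentQueue<DateTime>();

// Block until fewer than _callLimitPerSecond calls were issued in the last second.
// Callers enqueue DateTime.UtcNow after this returns, as the snippets below do.
static async Task WaitCallLimitPerSecondAsync()
{
    while (_timeStampQueue.Count >= _callLimitPerSecond)
    {
        if (_timeStampQueue.TryPeek(out DateTime oldest) &&
            DateTime.UtcNow - oldest > TimeSpan.FromSeconds(1))
        {
            _timeStampQueue.TryDequeue(out _); // Slide the window forward.
        }
        else
        {
            await Task.Delay(TimeSpan.FromMilliseconds(50)); // Wait for the window to open.
        }
    }
}
```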
-## Authorize the API call
-
-When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
- ## Create the PersonGroup
This code creates a **PersonGroup** named `"MyPersonGroup"` to save the persons.
const string personGroupId = "mypersongroupid";
const string personGroupName = "MyPersonGroup";
_timeStampQueue.Enqueue(DateTime.UtcNow);
-await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName);
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = personGroupName, ["recognitionModel"] = "recognition_04" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content);
+}
```

## Create the persons for the PersonGroup
await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName);
This code creates **Persons** concurrently, and uses `await WaitCallLimitPerSecondAsync()` to avoid exceeding the call rate limit.

```csharp
-Person[] persons = new Person[PersonCount];
+string?[] persons = new string?[PersonCount];
Parallel.For(0, PersonCount, async i =>
{
    await WaitCallLimitPerSecondAsync();
    string personName = $"PersonName#{i}";
- persons[i] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName);
+ using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = personName }))))
+ {
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons", content))
+ {
+ string contentString = await response.Content.ReadAsStringAsync();
+ persons[i] = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]);
+ }
+ }
});
```
Faces added to different persons are processed concurrently. Faces added for one
```csharp
Parallel.For(0, PersonCount, async i =>
{
- Guid personId = persons[i].PersonId;
    string personImageDir = @"/path/to/person/i/images";

    foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
Parallel.For(0, PersonCount, async i =>
        using (Stream stream = File.OpenRead(imagePath))
        {
- await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
+ using (var content = new StreamContent(stream))
+ {
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
+ await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons/{persons[i]}/persistedfaces?detectionModel=detection_03", content);
+ }
        }
    }
});
ai-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/find-similar-faces.md
[!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
-The [Find Similar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+The [Find Similar](/rest/api/face/face-recognition-operations/find-similar) operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md).
You need to detect faces in images before you can compare them. In this guide, t
The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_face_detect_recognize)]
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_face_detect_recognize)]
The following code uses the above method to get face data from a series of images.
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_loadfaces)]
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_loadfaces)]
#### [REST API](#tab/rest)
In this guide, the face detected in the *Family1-Dad1.jpg* image should be returned.
The following code calls the Find Similar API on the saved list of faces.
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar)]
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_find_similar)]
The following code prints the match details to the console:
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar_print)]
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FindSimilar.cs?name=snippet_find_similar_print)]
#### [REST API](#tab/rest)
ai-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md
The code snippets in this guide are written in C# by using the Azure AI Face client library.
## Setup
-This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
+This guide assumes that you already constructed a [FaceClient](/dotnet/api/azure.ai.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
## Submit data to the service
-To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input.
+To find faces and get their locations in an image, call the [DetectAsync](/dotnet/api/azure.ai.vision.face.faceclient.detectasync) method. It takes either a URL string or the raw image binary as input.
-The service returns a [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) object, which you can query for different kinds of information, specified below.
+The service returns a [FaceDetectionResult](/dotnet/api/azure.ai.vision.face.facedetectionresult) object, which you can query for different kinds of information, specified below.
-For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
+For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/azure.ai.vision.face.facedetectionresult.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
## Determine how to process the data
This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data.
If you set the parameter _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks. The optional _faceIdTimeToLive_ parameter specifies how long (in seconds) the face ID should be stored on the server. After this time expires, the face ID is removed. The default value is 86400 (24 hours).

### Get face landmarks
-[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
+[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `FaceDetectionModel.Detection03` and the _returnFaceLandmarks_ parameter to `true`.
### Get face attributes

Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concept-face-detection.md#attributes) conceptual section.
-To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
+To analyze face attributes, set the _detectionModel_ parameter to `FaceDetectionModel.Detection03` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/azure.ai.vision.face.faceattributetype) values.
## Get results from the service
The following code demonstrates how you might retrieve the locations of the nose and pupils:

You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:

When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.
The following code shows how you might retrieve the face attribute data that you requested in the original call. To learn more about each of the attributes, see the [Face detection and attributes](../concept-face-detection.md) conceptual guide.
In this guide, you learned how to use the various functionalities of face detection.
## Related articles

- [Reference documentation (REST)](/rest/api/face/operation-groups)
-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
+- [Reference documentation (.NET SDK)](https://aka.ms/azsdk-csharp-face-ref)
ai-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/mitigate-latency.md
We recommend that you select a region that is closest to your users to minimize latency.
The Face service provides two ways to upload images for processing: uploading the raw byte data of the image directly in the request, or providing a URL to a remote image. Regardless of the method, the Face service needs to download the image from its source location. If the connection from the Face service to the client or the remote server is slow or poor, it affects the response time of requests. If you have an issue with latency, consider storing the image in Azure Blob Storage and passing the image URL in the request. For more implementation details, see [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet).

An example API call:

```csharp
-var faces = await client.Face.DetectWithUrlAsync("https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name>");
+var url = "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name>";
+var response = await faceClient.DetectAsync(new Uri(url), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false);
+var faces = response.Value;
```

Be sure to use a storage account in the same region as the Face resource. This reduces the latency of the connection between the Face service and the storage account.
To achieve the optimal balance between accuracy and speed, follow these tips to optimize your input images.
#### Other file size tips

Note the following additional tips:
-- For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size increases processing speed. When you use detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080 pixels.
+- For face detection, when using detection model `FaceDetectionModel.Detection01`, reducing the image file size increases processing speed. When you use detection model `FaceDetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080 pixels.
- For face recognition, reducing the face size will only increase the speed if the image is smaller than 200x200 pixels.
- The performance of the face detection methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
If you need to call multiple APIs, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison, you can call them in an asynchronous task:

```csharp
-var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
-var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedydebating-richard-nixon.jpg");
+string url1 = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
+string url2 = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection2.jpg";
+var response1 = client.DetectAsync(new Uri(url1), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false);
+var response2 = client.DetectAsync(new Uri(url2), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false);
-Task.WaitAll (new Task<IList<DetectedFace>>[] { faces_1, faces_2 });
-IEnumerable<DetectedFace> results = faces_1.Result.Concat (faces_2.Result);
+Task.WaitAll(new Task<Response<IReadOnlyList<FaceDetectionResult>>>[] { response1, response2 });
+IEnumerable<FaceDetectionResult> results = response1.Result.Value.Concat(response2.Result.Value);
```

## Smooth over spiky traffic
ai-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md
When you use the [Detect] API, you can assign the model version with the `detectionModel` parameter.
A request URL for the [Detect] REST API looks like this:
-`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>`
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect?detectionModel={detectionModel}&recognitionModel={recognitionModel}&returnFaceId={returnFaceId}&returnFaceAttributes={returnFaceAttributes}&returnFaceLandmarks={returnFaceLandmarks}&returnRecognitionModel={returnRecognitionModel}&faceIdTimeToLive={faceIdTimeToLive}`
If you're using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API uses the default model version (`detection_01`). See the following code example for the .NET client library.

```csharp
string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
-var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: false, returnFaceLandmarks: false, recognitionModel: "recognition_04", detectionModel: "detection_03");
+var response = await faceClient.DetectAsync(new Uri(imageUrl), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false, returnFaceLandmarks: false);
+var faces = response.Value;
```

## Add face to Person with specified model
See the following code example for the .NET client library.
```csharp
// Create a PersonGroup and add a person with a face detected by the "detection_03" model
string personGroupId = "mypersongroupid";
-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
-
-string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Group Name", ["recognitionModel"] = "recognition_04" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content);
+}
+
+string? personId = null;
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Name" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons", content))
+ {
+ string contentString = await response.Content.ReadAsStringAsync();
+ personId = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]);
+ }
+}
string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
-await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = imageUrl }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03", content);
+}
```

This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library.

```csharp
-await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My face collection", ["recognitionModel"] = "recognition_04" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}", content);
+}
string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
-await client.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = imageUrl }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}/persistedfaces?detectionModel=detection_03", content);
+}
```

This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
In this article, you learned how to specify the detection model to use with different Face APIs.
* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp)
* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)
+* [Face JavaScript SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-javascript%253fpivots%253dprogramming-language-javascript)
[Detect]: /rest/api/face/face-detection-operations/detect
[Identify From Person Group]: /rest/api/face/face-recognition-operations/identify-from-person-group
ai-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-recognition-model.md
When using the [Detect] API, assign the model version with the `recognitionModel` parameter.
Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in the response. So, a request URL for the [Detect] REST API will look like this:
-`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel]&subscription-key=<Subscription key>`
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect?detectionModel={detectionModel}&recognitionModel={recognitionModel}&returnFaceId={returnFaceId}&returnFaceAttributes={returnFaceAttributes}&returnFaceLandmarks={returnFaceLandmarks}&returnRecognitionModel={returnRecognitionModel}&faceIdTimeToLive={faceIdTimeToLive}`
If you're using the client library, you can assign the value for `recognitionModel` by passing a string representing the version. If you leave it unassigned, a default model version of `recognition_01` will be used. See the following code example for the .NET client library.

```csharp
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
-var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: true, returnFaceLandmarks: true, recognitionModel: "recognition_01", returnRecognitionModel: true);
+string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
+var response = await faceClient.DetectAsync(new Uri(imageUrl), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true, returnFaceLandmarks: true, returnRecognitionModel: true);
+var faces = response.Value;
```

> [!NOTE]
The Face service can extract face data from an image and associate it with a **Person** object.
A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([Create Person Group] or [Create Large Person Group]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [Get Person Group] API with the _returnRecognitionModel_ parameter set as **true**.
-See the following code example for the .NET client library.
+See the following .NET code example.
```csharp
// Create an empty PersonGroup with "recognition_04" model
string personGroupId = "mypersongroupid";
-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Group Name", ["recognitionModel"] = "recognition_04" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content);
+}
```

In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it's set up to use the _recognition_04_ model to extract face features.
There is no change in the [Identify From Person Group] API; you only need to specify the model version in detection.
You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [Create Face List] API or [Create Large Face List]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [Get Face List] API with the _returnRecognitionModel_ parameter set as **true**.
-See the following code example for the .NET client library.
+See the following .NET code example.
```csharp
-await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My face collection", ["recognitionModel"] = "recognition_04" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}", content);
+}
```

This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
In this article, you learned how to specify the recognition model to use with different Face APIs.
* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp)
* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)
+* [Face JavaScript SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-javascript%253fpivots%253dprogramming-language-javascript)
[Detect]: /rest/api/face/face-detection-operations/detect
[Verify Face To Face]: /rest/api/face/face-recognition-operations/verify-face-to-face
[Identify From Person Group]: /rest/api/face/face-recognition-operations/identify-from-person-group
-[Find Similar]: /rest/api/face/face-recognition-operations/find-similar-from-large-face-list
+[Find Similar]: /rest/api/face/face-recognition-operations/find-similar-from-face-list
[Create Person Group]: /rest/api/face/person-group-operations/create-person-group
[Get Person Group]: /rest/api/face/person-group-operations/get-person-group
[Train Person Group]: /rest/api/face/person-group-operations/train-person-group
ai-services Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-headpose.md
In this guide, you'll see how you can use the HeadPose attribute of a detected f
The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to be able to rotate the rectangle to crop.
-The [Azure AI Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) sample app uses the HeadPose attribute to rotate its detected face rectangles.
+The [Azure AI Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/DemoWPF) sample app uses the HeadPose attribute to rotate its detected face rectangles.
### Explore the sample code
-You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
+You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Azure AI Face WPF](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/DemoWPF) app takes a list of **FaceDetectionResult** objects and returns a list of **[Face](https://github.com/Azure-Samples/azure-ai-vision/blob/main/face/DemoWPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
```csharp
/// <summary>
/// <param name="maxSize">Image rendering size</param>
/// <param name="imageInfo">Image width and height</param>
/// <returns>Face structure for rendering</returns>
-public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<DetectedFace> faces, int maxSize, Tuple<int, int> imageInfo)
+public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<FaceDetectionResult> faces, int maxSize, Tuple<int, int> imageInfo)
{
    var imageWidth = imageInfo.Item1;
    var imageHeight = imageInfo.Item2;
    // ...
}
```
### Display the updated rectangle
-From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data:
+From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/azure-ai-vision/blob/main/face/DemoWPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data:
```xaml
<DataTemplate>
```
## Next steps
-* See the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles.
-* Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements.
+* See the [Azure AI Face WPF](https://github.com/Azure-Samples/azure-ai-vision/tree/main/face/DemoWPF) app on GitHub for a working example of rotated face rectangles.
ai-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md
This guide demonstrates the migration process. It assumes a basic familiarity with **PersonGroup** and **FaceList**.
**LargePersonGroup** and **LargeFaceList** are collectively referred to as large-scale operations. **LargePersonGroup** can contain up to 1 million persons, each with a maximum of 248 faces. **LargeFaceList** can contain up to 1 million faces. The large-scale operations are similar to the conventional **PersonGroup** and **FaceList** but have some differences because of the new architecture.
-The samples are written in C# by using the Azure AI Face client library.
+The samples are written in C#.
> [!NOTE]
> To enable Face search performance for **Identification** and **FindSimilar** at large scale, introduce a **Train** operation to preprocess the **LargeFaceList** and **LargePersonGroup**. The training time varies from seconds to about half an hour, based on the actual capacity. During the training period, it's possible to perform **Identification** and **FindSimilar** if a successful training operation was completed before. The drawback is that newly added persons and faces don't appear in the results until a new training pass completes after the migration to large scale.
-## Step 1: Initialize the client object
-
-When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
-
-## Step 2: Code migration
+## Step 1: Code migration
This section focuses on how to migrate **PersonGroup** or **FaceList** implementation to **LargePersonGroup** or **LargeFaceList**. Although **LargePersonGroup** or **LargeFaceList** differs from **PersonGroup** or **FaceList** in design and internal implementation, the API interfaces are similar for backward compatibility.
```csharp
private static async Task TrainLargeFaceList(
    string largeFaceListId, int timeIntervalInMilliseconds = 1000)
{
+ HttpClient httpClient = new HttpClient();
+ httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
+ // Trigger a train call.
- await FaceClient.LargeFaceList.TrainAsync(largeFaceListId);
+ await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largefacelists/{largeFaceListId}/train", null);
    // Wait for training to finish.
    while (true)
    {
        await Task.Delay(timeIntervalInMilliseconds);
- var status = await faceClient.LargeFaceList.GetTrainingStatusAsyn(largeFaceListId);
+ string? trainingStatus = null;
+ using (var response = await httpClient.GetAsync($"{ENDPOINT}/face/v1.0/largefacelists/{largeFaceListId}/training"))
+ {
+ string contentString = await response.Content.ReadAsStringAsync();
+ trainingStatus = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["status"]);
+ }
- if (status.Status == Status.Running)
+ if ("running".Equals(trainingStatus))
        {
            continue;
        }
- else if (status.Status == Status.Succeeded)
+ else if ("succeeded".Equals(trainingStatus))
        {
            break;
        }
        // ... (remaining status handling elided)
    }
}
```
Previously, a typical use of **FaceList** with added faces and **FindSimilar** looks like this:
```csharp
const string FaceListId = "myfacelistid_001";
const string FaceListName = "MyFaceListDisplayName";
const string ImageDir = @"/path/to/FaceList/images";
-await faceClient.FaceList.CreateAsync(FaceListId, FaceListName);
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = FaceListName, ["recognitionModel"] = "recognition_04" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{FaceListId}", content);
+}
// Add Faces to the FaceList.
Parallel.ForEach(
    Directory.GetFiles(ImageDir, "*.jpg"),
    async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
{
- using (Stream stream = File.OpenRead(imagePath))
+ using (var content = new StreamContent(stream))
{
- await faceClient.FaceList.AddFaceFromStreamAsync(FaceListId, stream);
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
+ await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/facelists/{FaceListId}/persistedfaces?detectionModel=detection_03", content);
}
- });
+ }
+ });
// Perform FindSimilar.
const string QueryImagePath = @"/path/to/query/image";
-var results = new List<SimilarPersistedFace[]>();
+var results = new List<HttpResponseMessage>();
using (Stream stream = File.OpenRead(QueryImagePath))
{
- var faces = await faceClient.Face.DetectWithStreamAsync(stream);
+ var response = await faceClient.DetectAsync(BinaryData.FromStream(stream), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true);
+ var faces = response.Value;
    foreach (var face in faces)
    {
- results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, FaceListId, 20));
+ using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceId"] = face.FaceId, ["faceListId"] = FaceListId }))))
+ {
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ results.Add(await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/findsimilars", content));
+ }
    }
}
```
When migrating it to **LargeFaceList**, it becomes the following:
```csharp
const string LargeFaceListId = "mylargefacelistid_001";
const string LargeFaceListName = "MyLargeFaceListDisplayName";
const string ImageDir = @"/path/to/FaceList/images";
-await faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName);
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = LargeFaceListName, ["recognitionModel"] = "recognition_04" }))))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/largefacelists/{LargeFaceListId}", content);
+}
// Add Faces to the LargeFaceList.
Parallel.ForEach(
    Directory.GetFiles(ImageDir, "*.jpg"),
    async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
{
- using (Stream stream = File.OpenRead(imagePath))
+ using (var content = new StreamContent(stream))
{
- await faceClient.LargeFaceList.AddFaceFromStreamAsync(LargeFaceListId, stream);
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
+ await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largefacelists/{LargeFaceListId}/persistedfaces?detectionModel=detection_03", content);
}
- });
+ }
+ });
// Train() is a newly added operation for LargeFaceList.
-// Must call it before FindSimilarAsync() to ensure the newly added faces searchable.
+// Must call it before FindSimilar to ensure that the newly added faces are searchable.
await TrainLargeFaceList(LargeFaceListId);

// Perform FindSimilar.
const string QueryImagePath = @"/path/to/query/image";
-var results = new List<SimilarPersistedFace[]>();
+var results = new List<HttpResponseMessage>();
using (Stream stream = File.OpenRead(QueryImagePath))
{
- var faces = await faceClient.Face.DetectWithStreamAsync(stream);
+ var response = await faceClient.DetectAsync(BinaryData.FromStream(stream), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true);
+ var faces = response.Value;
    foreach (var face in faces)
    {
- results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, largeFaceListId: LargeFaceListId));
+ using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceId"] = face.FaceId, ["largeFaceListId"] = LargeFaceListId }))))
+ {
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ results.Add(await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/findsimilars", content));
+ }
    }
}
```

As previously shown, the data management and the **FindSimilar** part are almost the same. The only exception is that a fresh preprocessing **Train** operation must complete in the **LargeFaceList** before **FindSimilar** works.
-## Step 3: Train suggestions
+## Step 2: Train suggestions
Although the **Train** operation speeds up [FindSimilar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) and [Identification](/rest/api/face/face-recognition-operations/identify-from-large-person-group), the training time suffers, especially at large scale. The estimated training time at different scales is listed in the following table.
To better utilize the large-scale feature, we recommend the following strategies.
-### Step 3a: Customize time interval
+### Step 2a: Customize time interval
As shown in `TrainLargeFaceList()`, there's a time interval in milliseconds between checks of the training status. For a **LargeFaceList** with more faces, using a larger interval reduces the call count and cost. Customize the time interval according to the expected capacity of the **LargeFaceList**. The same strategy also applies to **LargePersonGroup**. For example, when you train a **LargePersonGroup** with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
-### Step 3b: Small-scale buffer
+### Step 2b: Small-scale buffer
Persons or faces in a **LargePersonGroup** or a **LargeFaceList** are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
An example workflow:
1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the **Train** operation on the master collection.
1. Delete the old buffer collection after the **Train** operation finishes on the master collection.
-### Step 3c: Standalone training
+### Step 2c: Standalone training
If a relatively long latency is acceptable, it isn't necessary to trigger the **Train** operation right after you add new data. Instead, the **Train** operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the **Train** frequency.
Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`.
```csharp
private static void Main()
{
- // Create a LargePersonGroup.
- const string LargePersonGroupId = "mylargepersongroupid_001";
- const string LargePersonGroupName = "MyLargePersonGroupDisplayName";
- faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, LargePersonGroupName).Wait();
-
    // Set up standalone training at regular intervals.
    const int TimeIntervalForStatus = 1000 * 60; // 1-minute interval for getting training status.
    const double TimeIntervalForTrain = 1000 * 60 * 60; // 1-hour interval for training.
    var trainTimer = new Timer(TimeIntervalForTrain);
- trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed(LargePersonGroupId, TimeIntervalForStatus);
+ trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed("mylargepersongroupid_001", TimeIntervalForStatus);
    trainTimer.AutoReset = true;
    trainTimer.Enabled = true;
}
```
ai-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-persondirectory.md
```csharp
HttpResponseMessage response;
var body = new Dictionary<string, object>();
body.Add("faceId", "{guid1}");
body.Add("personId", "{guid1}");
-var jsSerializer = new JavaScriptSerializer();
-byte[] byteData = Encoding.UTF8.GetBytes(jsSerializer.Serialize(body));
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
using (var content = new ByteArrayContent(byteData))
{
    // ...
}
```
ai-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-identity.md
And these images are the candidate faces:
![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg)
-To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) reference documentation.
+To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](/rest/api/face/face-recognition-operations/find-similar) reference documentation.
## Group faces
ai-services Custom Categories Rapid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/custom-categories-rapid.md
curl --location --request PATCH 'https://<endpoint>/contentsafety/text/incidents
--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
--header 'Content-Type: application/json' \
--data '{
- "incidentName": "<text-incident-name>",
- "incidentDefinition": "string"
+ \"incidentName\": \"<text-incident-name>\",
+ \"incidentDefinition\": \"string\"
}'
```
curl --location --request PATCH 'https://<endpoint>/contentsafety/image/incident
--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
--header 'Content-Type: application/json' \
--data '{
- "incidentName": "<image-incident-name>"
+ \"incidentName\": \"<image-incident-name>\"
}'
```
ai-services Azure Openai Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md
At the same time, customers often require a custom answer authoring experience t
* An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../../openai/how-to/create-resource.md).
* An Azure Language Service resource and custom question answering project. If you don't have one already, then [create one](../quickstart/sdk.md).
- * Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue.
* Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource.
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md
The GPT-4 Turbo with Vision model answers general questions about what's present
Enhancements let you incorporate other Azure AI services (such as Azure AI Vision) to add new functionality to the chat-with-vision experience.
-**Object grounding**: Azure AI Vision complements GPT-4 Turbo with Vision's text response by identifying and locating salient objects in the input images. This lets the chat model give more accurate and detailed responses about the contents of the image.
-
> [!IMPORTANT]
> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> [!IMPORTANT]
+> Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models.
+
+**Object grounding**: Azure AI Vision complements GPT-4 Turbo with Vision's text response by identifying and locating salient objects in the input images. This lets the chat model give more accurate and detailed responses about the contents of the image.
+
:::image type="content" source="../media/concepts/gpt-v/object-grounding.png" alt-text="Screenshot of an image with object grounding applied. Objects have bounding boxes with labels.":::

:::image type="content" source="../media/concepts/gpt-v/object-grounding-response.png" alt-text="Screenshot of a chat response to an image prompt about an outfit. The response is an itemized list of clothing items seen in the image.":::

**Optical Character Recognition (OCR)**: Azure AI Vision complements GPT-4 Turbo with Vision by providing high-quality OCR results as supplementary information to the chat model. It allows the model to produce higher quality responses for images with dense text, transformed images, and numbers-heavy financial documents, and increases the variety of languages the model can recognize in text.
-> [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
-
:::image type="content" source="../media/concepts/gpt-v/receipts.png" alt-text="Photo of several receipts.":::

:::image type="content" source="../media/concepts/gpt-v/ocr-response.png" alt-text="Screenshot of the JSON response of an OCR call.":::
Enhancements let you incorporate other Azure AI services (such as Azure AI Visio
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RW1eHRf]
-> [!NOTE]
-> In order to use the video prompt enhancement, you need both an Azure AI Vision resource, in the paid (S1) tier, in addition to your Azure OpenAI resource.
-
## Special pricing information

> [!IMPORTANT]
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Along with using Elasticsearch databases in Azure OpenAI Studio, you can also us
-## Deploy to a copilot (preview) or web app
+## Deploy to a copilot (preview), Teams app (preview), or web app
After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI studio.

:::image type="content" source="../media/use-your-data/deploy-model.png" alt-text="A screenshot showing the model deployment button in Azure OpenAI Studio." lightbox="../media/use-your-data/deploy-model.png":::
-This gives you the option of deploying a standalone web app for you and your users to interact with chat models using a graphical user interface. See [Use the Azure OpenAI web app](../how-to/use-web-app.md) for more information.
+This gives you multiple options for deploying your solution.
-You can also deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
+#### [Copilot (preview)](#tab/copilot)
+
+You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
> [!NOTE]
> Deploying to a copilot in Copilot Studio (preview) is only available in US regions.
+#### [Teams app (preview)](#tab/teams)
+
+A Teams app lets you bring a conversational experience to your users in Teams, improving operational efficiency and democratizing access to information. This Teams app is configured for users within your Azure account tenant and for personal (non-group) chat scenarios.
++
+**Prerequisites**
+
+- The latest version of [Visual Studio Code](https://code.visualstudio.com/) installed.
+- The latest version of [Teams toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) installed. This is a VS Code extension that creates a project scaffolding for your app.
+- [Node.js](https://nodejs.org/en/download/) (version 16 or 17) installed. For more information, see [Node.js version compatibility table for project type](/microsoftteams/platform/toolkit/build-environments#nodejs-version-compatibility-table-for-project-type).
+- [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) installed.
+- Sign in to your [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant). You can get a test account through the [Developer program](https://developer.microsoft.com/microsoft-365/dev-program).
+ - Enable **custom Teams apps** and turn on **custom app uploading** in your account (see the [instructions](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant#enable-custom-teams-apps-and-turn-on-custom-app-uploading)).
+- [Azure command-line interface (CLI)](/cli/azure/install-azure-cli) installed. This is a cross-platform command-line tool to connect to Azure and execute administrative commands on Azure resources. For more information on setting up environment variables, see the [Azure SDK documentation](https://github.com/Azure/azure-sdk-for-go/wiki/Set-up-Your-Environment-for-Authentication).
+- Your Azure account has been assigned the **Cognitive Services OpenAI user** or **Cognitive Services OpenAI Contributor** role on the Azure OpenAI resource you're using, allowing your account to make Azure OpenAI API calls. For more information, see [Using your data with Azure OpenAI securely](/azure/ai-services/openai/how-to/use-your-data-securely#using-the-api). For instructions on setting this role in the Azure portal, see [Add role assignment to an Azure OpenAI resource](/azure/ai-services/openai/how-to/role-based-access-control#add-role-assignment-to-an-azure-openai-resource).
++
+You can deploy to a standalone Teams app directly from Azure OpenAI Studio. Follow the steps below:
+
+1. After you've added your data to the chat model, select **Deploy** and then **a new Teams app (preview)**.
+
+1. Enter the name of your Teams app and download the resulting .zip file.
+
+1. Extract the .zip file and open the folder in Visual Studio Code.
+
+1. If you chose **API key** in the data connection step, manually copy and paste your Azure AI Search key into the `src\prompts\chat\config.json` file. Your Azure AI Search Key can be found in Azure OpenAI Studio Playground by selecting the **View code** button with the key located under Azure Search Resource Key. If you chose **System assigned managed identity**, you can skip this step. Learn more about different data connection options in the [Data connection](/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search#data-connection) section.
+
+1. Open the Visual Studio Code terminal and sign in to the Azure CLI by running the `az login` command, selecting the account that you assigned the **Cognitive Service OpenAI User** role to.
+
+1. To debug your app, press the **F5** key or select **Run and Debug** from the left pane. Then select your debugging environment from the dropdown list. A webpage opens where you can chat with your custom copilot.
+ > [!NOTE]
+ > The citation experience is available in **Debug (Edge)** or **Debug (Chrome)** only.
+
+1. After you've tested your copilot, you can provision, deploy, and publish your Teams app by selecting the **Teams Toolkit Extension** on the left pane in Visual Studio Code. Run the separate provision, deploy, and publish stages in the **Lifecycle** section. You may be asked to sign in to your Microsoft 365 account (where you have permissions to upload custom apps) and to your Azure account.
+
+1. Provision your app. For detailed instructions, see [Provision cloud resources](/microsoftteams/platform/toolkit/provision).
+
+1. Assign the **Cognitive Service OpenAI User** role to your deployed App Service resource:
+ 1. Go to the Azure portal and select the newly created Azure App Service resource.
+ 1. Go to **Settings** > **Identity** and enable the system-assigned identity.
+ 1. Select **Azure role assignments** and then **add role assignments**. Specify the following parameters:
+ * Scope: resource group
+ * Subscription: the subscription of your Azure OpenAI resource
+ * Resource group of your Azure OpenAI resource
+ * Role: **Cognitive Service OpenAI user**
+
+1. Deploy your app to Azure by following the instructions in [Deploy to the cloud](/microsoftteams/platform/toolkit/deploy).
+
+1. Publish your app to Teams by following the instructions in [Publish Teams app](/microsoftteams/platform/toolkit/publish).
+
+The README file in your Teams app has additional details and tips. Also, see [Tutorial - Build Custom Copilot using Teams](/microsoftteams/platform/teams-ai-library-tutorial) for guided steps.
+
+#### [Web app](#tab/web-app)
+
+Deploying to a standalone web app lets you and your users interact with chat models through a graphical user interface. See [Use the Azure OpenAI web app](../how-to/use-web-app.md) for more information.
+++

## Use Azure OpenAI On Your Data securely

You can use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks, and private endpoints. You can also restrict the documents that can be used in responses for different users with Azure AI Search security filters. See [Securely use Azure OpenAI On Your Data](../how-to/use-your-data-securely.md).
ai-services Use Your Image Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-image-data.md
Use this article to learn how to provide your own image data for GPT-4 Turbo with Vision.
## Prerequisites
-* An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-* Access granted to Azure OpenAI in the desired Azure subscription.
-
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by [completing the form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have a problem.
-* An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../how-to/create-resource.md).
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
+- An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../how-to/create-resource.md).
* At least the [Cognitive Services Contributor role](../how-to/role-based-access-control.md#cognitive-services-contributor) assigned to you for the Azure OpenAI resource.

## Add your data source
ai-services Azure Developer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/azure-developer-cli.md
Use this article to learn how to automate resource deployment for Azure OpenAI S
## Prerequisites

- An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-- Access granted to Azure OpenAI in the desired Azure subscription.
-
- Azure OpenAI requires registration and is currently available only to approved enterprise customers and partners. For more information, see [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context). You can apply for access to Azure OpenAI by [completing the form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have a problem.
-
- The Azure Developer CLI [installed](/azure/developer/azure-developer-cli/install-azd) on your machine.

## Clone and initialize the Azure Developer CLI template
ai-services Dall E https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/dall-e.md
OpenAI's DALL-E models generate images based on user-provided text prompts. This
#### [DALL-E 3](#tab/dalle3)

- An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.
-- Access granted to DALL-E in the desired Azure subscription.
- An Azure OpenAI resource created in the `SwedenCentral` region.
- Then, you need to deploy a `dalle3` model with your Azure resource. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).

#### [DALL-E 2 (preview)](#tab/dalle2)

- An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.
-- Access granted to DALL-E in the desired Azure subscription.
- An Azure OpenAI resource created in the East US region. For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
The **object grounding** integration brings a new layer to data analysis and use
> [!IMPORTANT] > To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource.
+> [!IMPORTANT]
+> Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models.
+ > [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
Follow these steps to set up a video retrieval system and integrate it with your
> [!IMPORTANT] > To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource.
+> [!IMPORTANT]
+> Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models.
+ > [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
ai-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/integrate-synapseml.md
This tutorial shows how to apply large language models at a distributed scale by
- An Azure subscription. <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. -- Access granted to Azure OpenAI in your Azure subscription.-
- Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete <a href="https://aka.ms/oai/access" target="_blank">this form</a>. If you need assistance, open an issue on this repo to contact Microsoft.
- An Azure OpenAI resource. [Create a resource](create-resource.md?pivots=web-portal#create-a-resource).
ai-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md
# How to configure Azure OpenAI Service with Microsoft Entra ID authentication
-More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your OpenAI resource using Microsoft Entra ID.
+More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your Azure OpenAI resource using Microsoft Entra ID.
In the following sections, you'll use the Azure CLI to sign in, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI.

## Prerequisites

- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
-- Access granted to the Azure OpenAI Service in the desired Azure subscription
-- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the [Request Access to Azure OpenAI Service form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have an issue.
- [Custom subdomain names are required to enable features like Microsoft Entra ID for authentication.](../../cognitive-services-custom-subdomains.md)
ai-services Provisioned Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md
The following guide walks you through setting up a provisioned deployment with y
## Prerequisites

- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
-- Access granted to Azure OpenAI in the desired Azure subscription.
- Currently, access to this service is by application. You can apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access?azure-portal=true).
- Obtained Quota for a provisioned deployment and purchased a commitment.

> [!NOTE]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
At Microsoft, we're committed to the advancement of AI driven by principles that
## How do I get access to Azure OpenAI?
-How do I get access to Azure OpenAI?
-
-Access is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">Microsoft's commitment to responsible AI</a>. For now, we're working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations.
-
-More specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to Azure OpenAI.
-
-Apply here for access:
-
-<a href="https://aka.ms/oaiapply" target="_blank">Apply now</a>
+A Limited Access registration form is not required to access most Azure OpenAI models. Learn more on the [Azure OpenAI Limited Access page](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context).
## Comparing Azure OpenAI and OpenAI
ai-services Text To Speech Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/text-to-speech-quickstart.md
The available voices are: `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer
## Prerequisites

- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
-- Access granted to Azure OpenAI Service in the desired Azure subscription.
- An Azure OpenAI resource created in the North Central US or Sweden Central regions with the `tts-1` or `tts-1-hd` model deployed. For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md).
-> [!NOTE]
-> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access).
-

## Set up

### Retrieve key and endpoint
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
In this tutorial you learn how to:
## Prerequisites

- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
-- Access granted to Azure OpenAI in the desired Azure subscription Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access.
- Python 3.8 or later version
- The following Python libraries: `json`, `requests`, `os`, `tiktoken`, `time`, `openai`, `numpy`.
- [Jupyter Notebooks](https://jupyter.org/)
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
## Prerequisites

- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
-- Access granted to Azure OpenAI in the desired Azure subscription.
-
- Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- An Azure OpenAI resource deployed in a [supported region and with a supported model](./concepts/use-your-data.md#regional-availability-and-model-support).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
For more information, see the [deployment types guide](https://aka.ms/aoai/docs/
### DALL-E and GPT-4 Turbo Vision GA configurable content filters
-Create custom content filters for your DALL-E 2 and 3, GPT-4 Turbo with Vision GA (gpt-4-turbo-2024-04-09) and GPT-4o deployments. [Content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new#configurability-preview)
+Create custom content filters for your DALL-E 2 and 3, GPT-4 Turbo with Vision GA (`turbo-2024-04-09`), and GPT-4o deployments. [Content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new#configurability-preview)
### Asynchronous Filter available for all Azure OpenAI customers
If you are currently using the `2023-03-15-preview` API, we recommend migrating
## April 2023

-- **DALL-E 2 public preview**. Azure OpenAI Service now supports image generation APIs powered by OpenAI's DALL-E 2 model. Get AI-generated images based on the descriptive text you provide. To learn more, check out the [quickstart](./dall-e-quickstart.md). To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/access).
+- **DALL-E 2 public preview**. Azure OpenAI Service now supports image generation APIs powered by OpenAI's DALL-E 2 model. Get AI-generated images based on the descriptive text you provide. To learn more, check out the [quickstart](./dall-e-quickstart.md).
- **Inactive deployments of customized models will now be deleted after 15 days; models will remain available for redeployment.** If a customized (fine-tuned) model is deployed for more than fifteen (15) days during which no completions or chat completions calls are made to it, the deployment will automatically be deleted (and no further hosting charges will be incurred for that deployment). The underlying customized model will remain available and can be redeployed at any time. To learn more check out the [how-to-article](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-studio#deploy-a-custom-model).
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to
## Prerequisites

- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
-- Access granted to Azure OpenAI Service in the desired Azure subscription.
- An Azure OpenAI resource with a `whisper` model deployed in a supported region. [Whisper model regional availability](./concepts/models.md#whisper-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md).
-> [!NOTE]
-> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access).
-

## Set up

### Retrieve key and endpoint
ai-services Migrate To Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/migrate-to-openai.md
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
* A QnA Maker project. * An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../openai/how-to/create-resource.md).
- * Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue.
* Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource. ## Migrate to Azure OpenAI
ai-studio Concept Model Distillation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/concept-model-distillation.md
+
+ Title: Distillation in AI Studio
+
+description: Learn how to do distillation in Azure AI Studio.
+++ Last updated : 07/23/2024+
+reviewer: anshirga
+++++
+# Distillation in Azure AI Studio
+
+In this article
+ - [Distillation](#distillation)
+ - [Next Steps](#next-steps)
+
+In Azure AI Studio, you can use distillation to efficiently train a smaller student model from a larger teacher model.
+
+## Distillation
+
+In machine learning, distillation is a technique used to transfer knowledge from a large, complex model (often called the "teacher model") to a smaller, simpler model (the "student model"). This process helps the smaller model achieve similar performance to the larger one while being more efficient in terms of computation and memory usage.
+
+The main steps in knowledge distillation involve:
+
+- **Using the teacher model** to generate predictions for the dataset.
+
+- **Training the student model** using these predictions, along with the original dataset, to mimic the teacher model's behavior.
+
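+The following toy sketch (our illustration, not taken from the sample notebook linked below) shows these two steps in Python with numpy: a fixed, hypothetical "teacher" function produces soft labels for an unlabeled dataset, and a small logistic-regression "student" is trained to mimic them.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Unlabeled dataset: 1,000 points with 2 features.
+X = rng.normal(size=(1000, 2))
+
+# Step 1: use the teacher to generate predictions (soft labels) for the dataset.
+def teacher_predict(x):
+    # Hypothetical "large" teacher: probability that feature 2 exceeds feature 1.
+    return 1.0 / (1.0 + np.exp(-5.0 * (x[:, 1] - x[:, 0])))
+
+soft_labels = teacher_predict(X)
+
+# Step 2: train a small student (logistic regression) to mimic the teacher
+# by minimizing cross-entropy against the teacher's soft labels.
+w, b = np.zeros(2), 0.0
+learning_rate = 0.5
+for _ in range(500):
+    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
+    grad = p - soft_labels  # gradient of cross-entropy with respect to logits
+    w -= learning_rate * (X.T @ grad) / len(X)
+    b -= learning_rate * grad.mean()
+
+agreement = np.mean((soft_labels > 0.5) == ((X @ w + b) > 0))
+print(f"Student agrees with teacher on {agreement:.1%} of points")
+```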
+To see how to perform distillation, use the [sample notebook](https://aka.ms/meta-llama-3.1-distillation). In this sample, the teacher model is Meta Llama 3.1 405B Instruct and the student model is Meta Llama 3.1 8B Instruct.
+
+The sample uses an advanced prompt during synthetic data generation that incorporates chain-of-thought (CoT) reasoning, which results in higher-accuracy data labels in the synthetic data and further improves the accuracy of the distilled model.
+
+## Next steps
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Learn more about deploying Meta Llama models](../how-to/deploy-models-llama.md)
+
+- [Azure AI FAQ article](../faq.yml)
ai-studio Concept Synthetic Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/concept-synthetic-data.md
+
+ Title: Synthetic data generation in AI Studio
+
+description: Learn how to generate Synthetic dataset in Azure AI Studio.
+++ Last updated : 07/23/2024+
+reviewer: anshirga
+++++
+# Synthetic data generation in Azure AI Studio
+
+In this article
+ - [Synthetic data generation](#synthetic-data-generation)
+ - [Next Steps](#next-steps)
+
+In Azure AI Studio, you can use synthetic data generation to efficiently produce artificial datasets for training and evaluating your models.
+
+## Synthetic data generation
+
+Synthetic data generation involves creating artificial data that mimics the statistical properties of real-world data. This data is generated using algorithms and machine learning techniques, and it can be used in various ways, such as computer simulations or by modeling real-world events.
+
+In machine learning, synthetic data is particularly valuable for several reasons:
+
+**Data Augmentation:** It helps in expanding the size of training datasets, which is crucial for training robust machine learning models. This is especially useful when real-world data is scarce or expensive to obtain.
+
+**Testing and Validation:** It allows for extensive testing and validation of machine learning models under various scenarios without the need for real-world data.
+
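+As a minimal illustration of the idea (our sketch, unrelated to the sample notebook linked below), the following Python code fits a Gaussian distribution to a small "real" dataset and then samples a larger synthetic dataset that mimics its statistical properties:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(42)
+
+# Pretend this is scarce real-world data: 50 samples with 3 features.
+real = rng.normal(loc=[10.0, 0.5, 100.0], scale=[2.0, 0.1, 15.0], size=(50, 3))
+
+# Estimate the statistical properties of the real data.
+mean = real.mean(axis=0)
+cov = np.cov(real, rowvar=False)
+
+# Sample ten times more synthetic points from the fitted distribution.
+synthetic = rng.multivariate_normal(mean, cov, size=500)
+
+print("real mean:     ", np.round(mean, 2))
+print("synthetic mean:", np.round(synthetic.mean(axis=0), 2))
+```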
+To see how to generate synthetic data, use the [sample notebook](https://aka.ms/meta-llama-3.1-datagen).
+
+## Next steps
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Learn more about deploying Meta Llama models](../how-to/deploy-models-llama.md)
+
+- [Azure AI FAQ article](../faq.yml)
ai-studio Fine Tuning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/fine-tuning-overview.md
There isn't a single right answer to this question, but you should have clearly
Now that you know when to leverage fine-tuning for your use-case, you can go to Azure AI Studio to find several models available to fine-tune including:

- Azure OpenAI models
-- Llama 2 family models
+- Meta Llama 2 family models
+- Meta Llama 3.1 family of models
### Azure OpenAI models
Please note for fine-tuning Azure OpenAI models, you must add a connection to an
### Llama 2 family models

The following Llama 2 family models are supported in Azure AI Studio for fine-tuning:
-- `Llama-2-70b`
-- `Llama-2-7b`
-- `Llama-2-13b`
+- `Meta-Llama-2-70b`
+- `Meta-Llama-2-7b`
+- `Meta-Llama-2-13b`
Fine-tuning of Llama 2 models is currently supported in projects located in West US 3.
+### Llama 3.1 family models
+The following Llama 3.1 family models are supported in Azure AI Studio for fine-tuning:
+- `Meta-Llama-3.1-70B-Instruct`
+- `Meta-Llama-3.1-8B-Instruct`
+
+Fine-tuning of Llama 3.1 models is currently supported in projects located in West US 3.
+ ## Related content - [Learn how to fine-tune an Azure OpenAI model in Azure AI Studio](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
ai-studio Data Image Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md
Use this article to learn how to provide your own image data for GPT-4 Turbo wit
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - Be sure that you're assigned at least the [Cognitive Services Contributor role](../../ai-services/openai/how-to/role-based-access-control.md#cognitive-services-contributor) for the Azure OpenAI resource. - An Azure AI Search resource. See [create an Azure AI Search service in the portal](/azure/search/search-create-service-portal). If you don't have an Azure AI Search resource, you're prompted to create one when you add your data source later in this guide.
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
Title: How to deploy Meta Llama models with Azure AI Studio
+ Title: How to deploy Meta Llama 3.1 models with Azure AI Studio
-description: Learn how to deploy Meta Llama models with Azure AI Studio.
+description: Learn how to deploy Meta Llama 3.1 models with Azure AI Studio.
Previously updated : 5/21/2024 Last updated : 7/21/2024 reviewer: shubhirajMsft
-# How to deploy Meta Llama models with Azure AI Studio
+# How to deploy Meta Llama 3.1 models with Azure AI Studio
[!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
-In this article, you learn about the Meta Llama models. You also learn how to use Azure AI Studio to deploy models from this set either to serverless APIs with pay-as you go billing or to managed compute.
+In this article, you learn about the Meta Llama model family. You also learn how to use Azure AI Studio to deploy models from this set either to serverless APIs with pay-as-you-go billing or to managed compute.
> [!IMPORTANT]
- > Read more about the announcement of Meta Llama 3 models available now on Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/Llama3Announcement) and from [Meta Announcement Blog](https://aka.ms/meta-llama3-announcement-blog).
+ > Read more about the announcement of Meta Llama 3.1 405B Instruct and other Llama 3.1 models available now on Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/meta-llama-3.1-release-on-azure) and from [Meta Announcement Blog](https://aka.ms/meta-llama-3.1-release-announcement).
-Meta Llama 3 models and tools are a collection of pretrained and fine-tuned generative text models ranging in scale from 8 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Meta-Llama-3-8B-Instruct and Meta-Llama-3-70B-Instruct. See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama3-langchain-sample), [LiteLLM](https://aka.ms/meta-llama3-litellm-sample), [OpenAI](https://aka.ms/meta-llama3-openai-sample) and the [Azure API](https://aka.ms/meta-llama3-azure-api-sample).
+Now available on Azure AI Models-as-a-Service:
+- `Meta-Llama-3.1-405B-Instruct`
+- `Meta-Llama-3.1-70B-Instruct`
+- `Meta-Llama-3.1-8B-Instruct`
-## Deploy Meta Llama models as a serverless API
+The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). All models support a long context length (128k) and are optimized for inference with support for grouped query attention (GQA). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.
-Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
+See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama-3.1-405B-instruct-langchain), [LiteLLM](https://aka.ms/meta-llama-3.1-405B-instruct-litellm), [OpenAI](https://aka.ms/meta-llama-3.1-405B-instruct-openai) and the [Azure API](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests).
-Meta Llama 3 models are deployed as a serverless API with pay-as-you-go billing through Microsoft Azure Marketplace, and they might add more terms of use and pricing.
+## Deploy Meta Llama 3.1 405B Instruct as a serverless API
+
+Meta Llama 3.1 models - like `Meta-Llama-3.1-405B-Instruct` - can be deployed as a serverless API with pay-as-you-go billing, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription. Meta Llama 3.1 models are deployed as a serverless API with pay-as-you-go billing through Microsoft Azure Marketplace, and they might add more terms of use and pricing.
### Azure Marketplace model offerings
-# [Meta Llama 3](#tab/llama-three)
+# [Meta Llama 3.1](#tab/llama-three)
-The following models are available in Azure Marketplace for Llama 3 when deployed as a service with pay-as-you-go:
+The following models are available in Azure Marketplace for Llama 3.1 and Llama 3 when deployed as a service with pay-as-you-go:
-* [Meta Llama-3 8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-chat)
-* [Meta Llama-3 70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-chat)
+* [Meta-Llama-3.1-405B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-405B-base)
+* [Meta-Llama-3.1-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70B-refresh)
+* [Meta-Llama-3.1-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8B-refresh)
+* [Meta-Llama-3-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-chat)
+* [Meta-Llama-3-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-chat)
# [Meta Llama 2](#tab/llama-two)
If you need to deploy a different model, [deploy it to managed compute](#deploy-
# [Meta Llama 3](#tab/llama-three) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 3 is only available with hubs created in these regions:
+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 3.1 and Llama 3 is only available with hubs created in these regions:
* East US * East US 2
If you need to deploy a different model, [deploy it to managed compute](#deploy-
To create a deployment: 1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+1. Choose `Meta-Llama-3.1-405B-Instruct` to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
Alternatively, you can initiate deployment by starting from your project in AI Studio. Select a project and then select **Deployments** > **+ Create**.
-1. On the model's **Details** page, select **Deploy** and then select **Serverless API with Azure AI Content Safety**.
+1. On the **Details** page for `Meta-Llama-3.1-405B-Instruct`, select **Deploy** and then select **Serverless API with Azure AI Content Safety**.
1. Select the project in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region. 1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
-1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, Meta-Llama-3-70B) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
+1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, `Meta-Llama-3.1-405B-Instruct`) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
> [!NOTE] > Subscribing a project to a particular Azure Marketplace offering (in this case, Meta-Llama-3-70B) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
To create a deployment:
1. You can always find the endpoint's details, URL, and access keys by navigating to the project page and selecting **Deployments** from the left menu.
-To learn about billing for Meta Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-llama-models-deployed-as-a-service).
+To learn about billing for Meta Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Meta Llama 3.1 models deployed as a service](#cost-and-quota-considerations-for-meta-llama-31-models-deployed-as-a-service).
# [Meta Llama 2](#tab/llama-two)
To create a deployment:
1. You can return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**, which you can use to call the deployment and generate completions. 1. You can always find the endpoint's details, URL, and access keys by navigating to your project and selecting **Deployments** from the left menu.
-To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-llama-models-deployed-as-a-service).
+To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Meta Llama 3.1 models deployed as a service](#cost-and-quota-considerations-for-meta-llama-31-models-deployed-as-a-service).
Models deployed as a service can be consumed using either the chat or the comple
1. Select your project or hub and then select **Deployments** from the left menu.
-1. Find and select the deployment you created.
+1. Find and select the `Meta-Llama-3.1-405B-Instruct` deployment you created.
1. Select **Open in playground**.
Models deployed as a service can be consumed using either the chat or the comple
1. Make an API request based on the type of model you deployed. - For completions models, such as `Meta-Llama-3-8B`, use the [`/completions`](#completions-api) API.
- - For chat models, such as `Meta-Llama-3-8B-Instruct`, use the [`/chat/completions`](#chat-api) API.
+ - For chat models, such as `Meta-Llama-3.1-405B-Instruct`, use the [`/chat/completions`](#chat-api) API.
- For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section.
+ For more information on using the APIs, see the [reference](#reference-for-meta-llama-31-models-deployed-as-a-service) section.
# [Meta Llama 2](#tab/llama-two)
Models deployed as a service can be consumed using either the chat or the comple
- For completions models, such as `Meta-Llama-2-7B`, use the [`/v1/completions`](#completions-api) API or the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/completions`. - For chat models, such as `Meta-Llama-2-7B-Chat`, use the [`/v1/chat/completions`](#chat-api) API or the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/chat/completions`.
- For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section.
+ For more information on using the APIs, see the [reference](#reference-for-meta-llama-31-models-deployed-as-a-service) section.
-### Reference for Meta Llama models deployed as a service
+### Reference for Meta Llama 3.1 models deployed as a service
Llama models accept both the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/chat/completions` and a [Llama Chat API](#chat-api) on `/v1/chat/completions`. In the same way, text completions can be generated with the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) on the route `/completions` or with a [Llama Completions API](#completions-api) on `/v1/completions`.
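For example, a minimal chat call with Python `requests` might look like the following sketch. This is our illustration: the endpoint URL and key are placeholders for the **Target** URL and **Key** from your deployment's details page, and we assume key-based authentication via the `Authorization` header.

```python
import requests

ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com"  # placeholder
KEY = "<your-api-key>"  # placeholder

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is so special about Llama 3.1 405B?"},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    f"{ENDPOINT}/chat/completions",  # Azure AI Model Inference route
    headers={"Authorization": f"Bearer {KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```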
The following is an example response:
## Deploy Meta Llama models to managed compute
-Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama models to managed compute in AI Studio. When deployed to managed compute, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to managed compute consume quota from your subscription. All the models in the Llama family can be deployed to managed compute.
+Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama 3.1 models to managed compute in AI Studio. When deployed to managed compute, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to managed compute consume quota from your subscription. The following models from the 3.1 release wave are available on managed compute:
+- `Meta-Llama-3.1-8B-Instruct` (FT supported)
+- `Meta-Llama-3.1-70B-Instruct` (FT supported)
+- `Meta-Llama-3.1-8B` (FT supported)
+- `Meta-Llama-3.1-70B` (FT supported)
+- `Llama Guard 3 8B`
+- `Prompt Guard`
-Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).
+Follow these steps to deploy a model such as `Meta-Llama-3.1-70B-Instruct` to managed compute in [Azure AI Studio](https://ai.azure.com).
1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time en
1. On the model's **Details** page, select **Deploy** next to the **View license** button.
- :::image type="content" source="../media/deploy-monitor/llama/deploy-real-time-endpoint.png" alt-text="A screenshot showing how to deploy a model with the real-time endpoint option." lightbox="../media/deploy-monitor/llama/deploy-real-time-endpoint.png":::
+ :::image type="content" source="../media/deploy-monitor/llama/deploy-real-time-endpoint.png" alt-text="A screenshot showing how to deploy a model with the managed compute option." lightbox="../media/deploy-monitor/llama/deploy-real-time-endpoint.png":::
1. On the **Deploy with Azure AI Content Safety (preview)** page, select **Skip Azure AI Content Safety** so that you can continue to deploy the model using the UI.
For reference about how to invoke Llama models deployed to managed compute, see
##### More inference examples
-# [Meta Llama 3](#tab/llama-three)
-
-| **Package** | **Sample Notebook** |
-|-|-|
-| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/openaisdk.ipynb) |
-| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/langchain.ipynb) |
-| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/webrequests.ipynb) |
-| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/litellm.ipynb) |
-
-# [Meta Llama 2](#tab/llama-two)
| **Package** | **Sample Notebook** |
|-|-|
-| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/openaisdk.ipynb) |
-| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/langchain.ipynb) |
-| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/webrequests.ipynb) |
-| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/litellm.ipynb) |
--
+| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-openai)|
+| LangChain | [langchain.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-langchain)|
+| LiteLLM SDK | [litellm.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-litellm) |
## Cost and quotas
-### Cost and quota considerations for Llama models deployed as a service
+### Cost and quota considerations for Meta Llama 3.1 models deployed as a service
-Llama models deployed as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or [fine-tuning the models](./fine-tune-model-llama.md).
+Meta Llama 3.1 models deployed as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or [fine-tuning the models](./fine-tune-model-llama.md).
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
For more information on how to track costs, see [monitor costs for models offere
:::image type="content" source="../media/cost-management/marketplace/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="../media/cost-management/marketplace/costs-model-as-service-cost-details.png":::
-Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+Quota is managed per deployment. Each deployment has a rate limit of 400,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
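+When a deployment exceeds these limits, the service returns HTTP 429 responses. A common client-side mitigation is to retry with exponential backoff, as in the following Python sketch (our illustration, not an SDK feature):
+
+```python
+import time
+
+import requests
+
+
+def post_with_backoff(url, headers, payload, max_retries=5):
+    """POST with exponential backoff on HTTP 429 (rate limited)."""
+    for attempt in range(max_retries):
+        response = requests.post(url, headers=headers, json=payload, timeout=60)
+        if response.status_code != 429:
+            return response
+        # Honor Retry-After when present; otherwise back off exponentially.
+        delay = float(response.headers.get("Retry-After", 2 ** attempt))
+        time.sleep(delay)
+    return response
+```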
-### Cost and quota considerations for Llama models deployed as managed compute
+### Cost and quota considerations for Meta Llama 3.1 models deployed as managed compute
-For deployment and inferencing of Llama models with managed compute, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
+For deployment and inferencing of Meta Llama 3.1 models with managed compute, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
## Content filtering
Models deployed as a serverless API with pay-as-you-go are protected by Azure AI
## Next steps - [What is Azure AI Studio?](../what-is-ai-studio.md)-- [Fine-tune a Meta Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)
+- [Fine-tune Meta Llama 3.1 models in Azure AI Studio](fine-tune-model-llama.md)
- [Azure AI FAQ article](../faq.yml) - [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Fine Tune Model Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-model-llama.md
description: Learn how to fine-tune Meta Llama models in Azure AI Studio.
Previously updated : 5/21/2024 Last updated : 7/23/2024 reviewer: shubhirajMsft
Fine-tuning provides significant value by enabling customization and optimizatio
In this article, you learn how to fine-tune Meta Llama models in [Azure AI Studio](https://ai.azure.com).
-The [Meta Llama family of large language models (LLMs)](./deploy-models-llama.md) is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with Reinforcement Learning from Human Feedback (RLHF), called Llama-2-chat.
+The [Meta Llama family of large language models (LLMs)](./deploy-models-llama.md) is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with Reinforcement Learning from Human Feedback (RLHF), called Llama-Instruct.
## Models
-# [Meta Llama 3](#tab/llama-three)
+# [Meta Llama 3.1](#tab/llama-three)
-Fine-tuning of Llama 3 models is currently not supported.
+The following models are available in Azure Marketplace for Llama 3.1 when fine-tuning as a service with pay-as-you-go billing:
+
+- `Meta-Llama-3.1-70B-Instruct` (preview)
+- `Meta-Llama-3.1-8B-Instruct` (preview)
+
+Fine-tuning of Llama 3.1 models is currently supported in projects located in West US 3.
+
+> [!IMPORTANT]
+> At this time, fine-tuning of Llama 3.1 models with a sequence length of 128K isn't supported.
# [Meta Llama 2](#tab/llama-two)

The following models are available in Azure Marketplace for Llama 2 when fine-tuning as a service with pay-as-you-go billing:

-- `Llama-2-70b` (preview)
-- `Llama-2-13b` (preview)
-- `Llama-2-7b` (preview)
+- `Meta Llama-2-70b` (preview)
+- `Meta Llama-2-13b` (preview)
+- `Meta Llama-2-7b` (preview)
Fine-tuning of Llama 2 models is currently supported in projects located in West US 3.
Fine-tuning of Llama 2 models is currently supported in projects located in West
## Prerequisites
-# [Meta Llama 3](#tab/llama-three)
+# [Meta Llama 3.1](#tab/llama-three)
++
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md).
+
+ > [!IMPORTANT]
+ > For Meta Llama 3.1 models, the pay-as-you-go model fine-tune offering is only available with AI hubs created in the **West US 3** region.
+
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
+- Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
+
+ - On the Azure subscription (to subscribe the Azure AI project to the Azure Marketplace offering, once for each project, per offering):
+ - `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
+ - `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action`
+ - `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read`
+ - `Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read`
+ - `Microsoft.SaaS/register/action`
+
+ - On the resource group (to create and use the SaaS resource):
+ - `Microsoft.SaaS/resources/read`
+ - `Microsoft.SaaS/resources/write`
+
+ - On the Azure AI project (to deploy endpoints; the Azure AI Developer role already contains these permissions):
+ - `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*`
+ - `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*`
+
+ For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
-Fine-tuning of Llama 3 models is currently not supported.
# [Meta Llama 2](#tab/llama-two)
The supported file type is JSON Lines. Files are uploaded to the default datasto
## Fine-tune a Meta Llama model
-# [Meta Llama 3](#tab/llama-three)
+# [Meta Llama 3.1](#tab/llama-three)
+
+To fine-tune a Llama 3.1 model:
+
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+
+1. On the model's **Details** page, select **fine-tune**.
+
+1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **West US 3** region.
+1. On the fine-tune wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, `Meta-Llama-3.1-70B-Instruct`) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
+
+ > [!NOTE]
+ > Subscribing a project to a particular Azure Marketplace offering (in this case, `Meta-Llama-3.1-70B-Instruct`) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+
+1. Once you sign up the project for the particular Azure Marketplace offering, subsequent fine-tuning of the _same_ offering in the _same_ project doesn't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
+
+1. Enter a name for your fine-tuned model and, optionally, tags and a description.
+1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information.
+
+ > [!NOTE]
+ > If your training and validation files are in a credential-less datastore, you need to allow the workspace managed identity to access the datastore in order to proceed with MaaS fine-tuning with credential-less storage. On the **Datastore** page, after selecting **Update authentication**, select the following option:
+
+ ![Use workspace managed identity for data preview and profiling in Azure Machine Learning Studio.](../media/how-to/fine-tune/llama/credentials.png)
+
+ Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.
+ - The batch size to use for training. When set to -1, the batch size is calculated as 0.2% of the number of examples in the training set, up to a maximum of 256 (see the sketch after these steps).
+ - The learning rate multiplier. The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this value, which must be between 0.0 and 5.0. We recommend experimenting with values between 0.5 and 2; empirically, we've found that larger learning rates often perform better with larger batch sizes.
+ - Number of training epochs. An epoch refers to one full cycle through the data set.
+
+1. Task parameters are an optional, advanced step. Tuning hyperparameters is essential for optimizing large language models (LLMs) in real-world applications; it allows for improved performance and efficient resource usage. You can use the default settings, or advanced users can customize parameters such as epochs or learning rate.
+
+1. Review your selections and proceed to train your model.
+
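+As an illustration of the default batch-size rule mentioned in the training-data step, the following Python sketch computes it (the 0.2% rule and the cap of 256 come from the description above; the floor of 1 is our assumption):
+
+```python
+def default_batch_size(n_training_examples: int) -> int:
+    """Default batch size: 0.2% of the training set size, capped at 256."""
+    return min(256, max(1, round(0.002 * n_training_examples)))
+
+print(default_batch_size(5_000))    # 10
+print(default_batch_size(200_000))  # 256
+```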
+Once your model is fine-tuned, you can deploy it and use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy the Llama 3.1 family of large language models with Azure AI Studio](./deploy-models-llama.md).
-Fine-tuning of Llama 3 models is currently not supported.
# [Meta Llama 2](#tab/llama-two)
ai-studio Azure Open Ai Gpt 4V Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md
The prompt flow Azure OpenAI GPT-4 Turbo with Vision tool enables you to use you
## Prerequisites - An Azure subscription. <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">You can create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-
- Currently, you must apply for access to this service. To apply for access to Azure OpenAI, complete the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- - An [AI Studio hub](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in [one of the regions that support GPT-4 Turbo with Vision](../../../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). When you deploy from your project's **Deployments** page, select `gpt-4` as the model name and `vision-preview` as the model version. ## Build with the Azure OpenAI GPT-4 Turbo with Vision tool
ai-studio Get Started Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/get-started-playground.md
The steps in this quickstart include:
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- - You need an Azure AI Studio hub or permissions to create one. Your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the hub. For more information, see [hubs](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md). - If your role is **Contributor** or **Owner**, you can [create a hub in this tutorial](#create-a-project-in-azure-ai-studio). - If your role is **Azure AI Developer**, the hub must already be created.
ai-studio Hear Speak Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md
The speech to text and text to speech features can be used together or separatel
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- - An [AI Studio hub](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - An [AI Studio project](../how-to/create-projects.md).
ai-studio Multimodal Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md
Extra usage fees might apply when using GPT-4 Turbo with Vision and Azure AI Vis
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- Once you have your Azure subscription, <a href="/azure/ai-services/openai/how-to/create-resource?pivots=web-portal" title="Create an Azure OpenAI resource." target="_blank">create an Azure OpenAI resource </a>. - An [AI Studio hub](../how-to/create-azure-ai-resource.md) with your Azure OpenAI resource added as a connection.
ai-studio Copilot Sdk Build Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/copilot-sdk-build-rag.md
This system is able to interpret the intent of the query "how much does it cost?
If you navigate to the trace from this flow run, you see this in action. The local traces link shows in the console output before the result of the flow test run. ## Clean up resources
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
The steps in this tutorial are:
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- - An [AI Studio hub](../how-to/create-azure-ai-resource.md), [project](../how-to/create-projects.md), and [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already. - An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data.
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
The steps in this tutorial are:
## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.-
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- - An [AI Studio hub](../how-to/create-azure-ai-resource.md), [project](../how-to/create-projects.md), and [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already. - An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data.
ai-studio What Is Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md
You can [explore AI Studio (including the model catalog)](./how-to/model-catalog
But for full functionality there are some requirements: - You need an [Azure account](https://azure.microsoft.com/free/). -- You also need to apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access). You receive a follow-up email when your subscription is added. ## Next steps
aks Aks Extension Attach Azure Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-attach-azure-container-registry.md
+
+ Title: Attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+description: Learn how to attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code.
++ Last updated : 07/15/2024++++
+# Attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+
+In this article, you learn how to attach to Azure Container Registry (ACR) using the Azure Kubernetes Service (AKS) extension for Visual Studio Code.
+
+## Prerequisites
+
+Before you begin, make sure you have the following resources:
+
+* An Azure container registry. If you don't have one, create one using the steps in [Quickstart: Create a private container registry][create-acr-cli].
+* An AKS cluster. If you don't have one, create one using the steps in [Quickstart: Deploy an AKS cluster][deploy-aks-cli].
+* The Azure Kubernetes Service (AKS) extension for Visual Studio Code downloaded. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode].
+
+## Attach your Azure container registry to your AKS cluster
+
+You can access the screen for attaching your container registry to your AKS cluster using the command palette or the Kubernetes view.
+
+### [Command palette](#tab/command-palette)
+
+1. On your keyboard, press `Ctrl+Shift+P` to open the command palette.
+2. Enter the following information:
+
+ * **Subscription**: Select the Azure subscription that holds your resources.
+ * **ACR Resource Group**: Select the resource group for your container registry.
+ * **Container Registry**: Select the container registry you want to attach to your cluster.
+ * **Cluster Resource Group**: Select the resource group for your cluster.
+ * **Cluster**: Select the cluster you want to attach to your container registry.
+
+3. Select **Attach**.
+
+ You should see a green checkmark, which means your container registry is attached to your AKS cluster.
+
+### [Kubernetes view](#tab/kubernetes-view)
+
+1. In the Kubernetes tab, under Clouds > Azure > your subscription > Automated Deployments, right-click your cluster and select **Attach ACR to Cluster**.
+2. Enter the following information:
+
+ * **Subscription**: Select the Azure subscription that holds your resources.
+ * **ACR Resource Group**: Select the resource group for your container registry.
+ * **Container Registry**: Select the container registry you want to attach to your cluster.
+ * **Cluster Resource Group**: Select the resource group for your cluster.
+ * **Cluster**: Select the cluster you want to attach to your container registry.
+
+3. Select **Attach**.
+
+ You should see a green checkmark, which means your container registry is attached to your AKS cluster.
+++
+For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features].
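+
+If you prefer to script this step, the attachment the extension performs can also be done with the Azure CLI. The following is a minimal sketch using placeholder names; substitute your own registry, cluster, and resource groups:
+
+```azurecli-interactive
+# Grant the cluster's identity pull access to the registry
+az aks update \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --attach-acr myRegistry
+```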
+
+## Product support and feedback
+
+If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github].
+
+## Next steps
+
+To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons].
+
+<!-- LINKS -->
+[create-acr-cli]: ../container-registry/container-registry-get-started-azure-cli.md
+[deploy-aks-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[install-aks-vscode]: ./aks-extension-vs-code.md#installation
+[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features
+[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose
+[aks-addons]: ./integrations.md
+
aks Aks Extension Draft Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-draft-deployment.md
+
+ Title: Create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+description: Learn how to create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code.
++ Last updated : 07/15/2024++++
+# Create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+
+In this article, you learn how to create a Kubernetes deployment using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. Automated Deployments provides an easy way to automate the process of scaling, updating, and maintaining your applications.
+
+## Prerequisites
+
+Before you begin, make sure you have the following resources:
+
+* An active folder with code open in Visual Studio Code.
+* The Azure Kubernetes Service (AKS) extension for Visual Studio Code downloaded. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode].
+
+## Create a Kubernetes deployment using the Azure Kubernetes Service (AKS) extension
+
+You can access the screen to create a Kubernetes deployment using the command palette or the explorer view.
+
+### [Command palette](#tab/command-palette)
+
+1. On your keyboard, press `Ctrl+Shift+P` to open the command palette.
+2. In the search bar, search for and select **Automated Deployments: Create a Deployment**.
+3. Enter the following information:
+
+ * **Subscription**: Select your Azure subscription.
+ * **Location**: Select a location where you want to save your Kubernetes deployment files.
+ * **Deployment options**: Select `Kubernetes manifests`, `Helm`, or `Kustomize`.
+    * **Target port**: Select the port your application listens on in your deployment. This port usually matches the port exposed in your Dockerfile.
+    * **Service port**: Select the port the service listens on for incoming traffic.
+    * **Namespace**: Select the namespace to deploy your application into.
+
+4. Select **Create**.
++
+### [Explorer view](#tab/explorer-view)
+
+1. Right-click in the explorer pane where your active folder is open and select **Create a Deployment**.
+2. Enter the following information:
+
+ * **Subscription**: Select your Azure subscription.
+ * **Location**: Select a location where you want to save your Kubernetes deployment files.
+ * **Deployment options**: Select `Kubernetes manifests`, `Helm`, or `Kustomize`.
+    * **Target port**: Select the port your application listens on in your deployment. This port usually matches the port exposed in your Dockerfile.
+    * **Service port**: Select the port the service listens on for incoming traffic.
+    * **Namespace**: Select the namespace to deploy your application into.
+
+3. Select **Create**.
+++
+For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features].
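+
+The target port and service port you enter map to the container port and service port of the generated Kubernetes resources. As a rough illustration of the same relationship with plain `kubectl` (hypothetical image and names, not the extension's output):
+
+```console
+# --port is the service port; --target-port is the port the app listens on
+kubectl create deployment store-front --image=myregistry.azurecr.io/store-front:latest --namespace=default
+kubectl expose deployment store-front --port=80 --target-port=8080 --namespace=default
+```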
+
+## Product support and feedback
+
+If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github].
+
+## Next steps
+
+To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons].
+
+<!-- LINKS -->
+[install-aks-vscode]: ./aks-extension-vs-code.md#installation
+[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features
+[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose
+[aks-addons]: ./integrations.md
+
+
aks Aks Extension Draft Dockerfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-draft-dockerfile.md
+
+ Title: Create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+description: Learn how to create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code.
++ Last updated : 07/15/2024++++
+# Create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+
+In this article, you learn how to create a Dockerfile using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. A Dockerfile is essential for Kubernetes because it defines the blueprint for creating Docker images. These images encapsulate your application along with its dependencies and environment settings, ensuring consistent deployment across various environments.
+
+## Prerequisites
+
+Before you begin, make sure you have the following resources:
+
+* An active folder with code open in Visual Studio Code.
+* The Azure Kubernetes Service (AKS) extension for Visual Studio Code downloaded. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode].
+
+## Create a Dockerfile using the Azure Kubernetes Service (AKS) extension
+
+You can access the screen to create a Dockerfile using the command palette or the explorer view.
+
+### [Command palette](#tab/command-palette)
+
+1. On your keyboard, press `Ctrl+Shift+P` to open the command palette.
+2. In the search bar, search for and select **Automated Deployments: Create a Dockerfile**.
+3. Enter the following information:
+
+ * **Location**: Select a location where you want to save your Dockerfile.
+ * **Programming language**: Select the programming language your app is written in.
+ * **Programming language version**: Select the programming language version.
+    * **Application Port**: Select the port your application listens on for incoming network connections.
+
+4. Select **Create**.
+
+### [Explorer view](#tab/explorer-view)
+
+1. Right-click in the explorer pane where your active folder is open and select **Create a Dockerfile**.
+2. Enter the following information:
+
+ * **Location**: Select a location where you want to save your Dockerfile.
+ * **Programming language**: Select the programming language your app is written in.
+ * **Programming language version**: Select the programming language version.
+    * **Application Port**: Select the port your application listens on for incoming network connections.
+
+3. Select **Create**.
+++
+For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features].
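+
+For orientation, here's a minimal sketch of the kind of Dockerfile this flow produces; the actual generated file depends on the language, version, and port you select (this example assumes a Python 3.10 app listening on port 8080):
+
+```dockerfile
+# Hypothetical generated Dockerfile for a Python 3.10 app
+FROM python:3.10-slim
+WORKDIR /app
+COPY . .
+RUN pip install -r requirements.txt
+# The application port selected in the extension
+EXPOSE 8080
+CMD ["python", "app.py"]
+```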
+
+## Product support and feedback
+
+If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github].
+
+## Next steps
+
+To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons].
+
+<!-- LINKS -->
+[install-aks-vscode]: ./aks-extension-vs-code.md#installation
+[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features
+[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose
+[aks-addons]: ./integrations.md
+
aks Aks Extension Draft Github Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-extension-draft-github-workflow.md
+
+ Title: Create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+description: Learn how to create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code.
++ Last updated : 07/15/2024++++
+# Create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code
+
+In this article, you learn how to create a GitHub Workflow using Automated Deployments in the Azure Kubernetes Service (AKS) extension for Visual Studio Code. A GitHub Workflow automates various development tasks, such as building, testing, and deploying code, ensuring consistency and efficiency across the development process. It enhances collaboration by integrating seamlessly with version control, enabling continuous integration and continuous deployment (CI/CD) pipelines, and ensuring that all changes are thoroughly vetted before being merged into the main codebase.
+
+## Prerequisites
+
+Before you begin, make sure you have the following resources:
+
+* An active folder with code open in Visual Studio Code.
+* An active `git` repository in the current workspace.
+* The Azure Kubernetes Service (AKS) extension for Visual Studio Code downloaded. For more information, see [Install the Azure Kubernetes Service (AKS) extension for Visual Studio Code][install-aks-vscode].
+
+## Create a GitHub Workflow using the Azure Kubernetes Service (AKS) extension
+
+You can access the screen to create a GitHub Workflow using the command palette or the Kubernetes view.
+
+### [Command palette](#tab/command-palette)
+
+1. On your keyboard, press `Ctrl+Shift+P` to open the command palette.
+2. Enter the following information:
+
+ * **Workflow name**: Enter a name for your GitHub Workflow.
+    * **GitHub repository**: Select the location where you want to save your Kubernetes deployment files.
+ * **Subscription**: Select your Azure subscription.
+ * **Dockerfile**: Select the Dockerfile that you want to build in the GitHub Action.
+ * **Build context**: Select a build context.
+ * **ACR Resource Group**: Select an ACR resource group.
+ * **Container Registry**: Select a container registry.
+ * **Azure Container Registry image**: Select or enter an Azure Container Registry image.
+ * **Cluster Resource Group**: Select your cluster resource group.
+ * **Cluster**: Select your AKS cluster.
+    * **Namespace**: Select or enter the namespace to deploy into.
+ * **Type**: Select the type of deployment option.
+
+3. Select **Create**.
+
+### [Kubernetes view](#tab/kubernetes-view)
+
+1. In the Kubernetes tab, under Clouds > Azure > your subscription > Automated Deployments, right-click your cluster and select **Create a GitHub Workflow**.
+2. Enter the following information:
+
+ * **Workflow name**: Enter a name for your GitHub Workflow.
+    * **GitHub repository**: Select the location where you want to save your Kubernetes deployment files.
+ * **Subscription**: Select your Azure subscription.
+ * **Dockerfile**: Select the Dockerfile that you want to build in the GitHub Action.
+ * **Build context**: Select a build context.
+ * **ACR Resource Group**: Select an ACR resource group.
+ * **Container Registry**: Select a container registry.
+    * **Azure Container Registry image**: Select or enter an Azure Container Registry image.
+ * **Cluster Resource Group**: Select your cluster resource group.
+ * **Cluster**: Select your AKS cluster.
+    * **Namespace**: Select or enter the namespace to deploy into.
+ * **Type**: Select the type of deployment option.
+
+3. Select **Create**.
+++
+For more information, see [AKS extension for Visual Studio Code features][aks-vscode-features].
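+
+For orientation, the generated workflow typically builds your Dockerfile, pushes the image to your container registry, and deploys your manifests to the cluster. The following is a hypothetical sketch of that shape, not the extension's exact output; the action versions, secret name, and paths are assumptions:
+
+```yaml
+name: Build and deploy to AKS
+on:
+  push:
+    branches: [main]
+jobs:
+  build-and-deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: azure/login@v2
+        with:
+          creds: ${{ secrets.AZURE_CREDENTIALS }}
+      # Build the Dockerfile and push the image to ACR
+      - run: az acr build --registry myRegistry --image myapp:${{ github.sha }} .
+      - uses: azure/aks-set-context@v3
+        with:
+          resource-group: myClusterResourceGroup
+          cluster-name: myAKSCluster
+      # Apply the deployment manifests to the selected namespace
+      - uses: Azure/k8s-deploy@v5
+        with:
+          namespace: default
+          manifests: manifests/deployment.yaml
+          images: myregistry.azurecr.io/myapp:${{ github.sha }}
+```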
+
+## Product support and feedback
+
+If you have a question or want to offer product feedback, please open an issue on the [AKS extension GitHub repository][aks-vscode-github].
+
+## Next steps
+
+To learn more about other AKS add-ons and extensions, see [Add-ons, extensions, and other integrations for AKS][aks-addons].
+
+<!-- LINKS -->
+[install-aks-vscode]: ./aks-extension-vs-code.md#installation
+[aks-vscode-features]: https://code.visualstudio.com/docs/azure/aksextensions#_features
+[aks-vscode-github]: https://github.com/Azure/vscode-aks-tools/issues/new/choose
+[aks-addons]: ./integrations.md
+
aks Tutorial Kubernetes Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md
Title: Kubernetes on Azure tutorial - Deploy an application to Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using images stored in Azure Container Registry. Previously updated : 02/20/2023 Last updated : 06/10/2024 #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table ```
-2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`.
-
- ```azurecli-interactive
- vi aks-store-quickstart.yaml
- ```
+2. Make sure you're in the cloned *aks-store-demo* directory, and then open the `aks-store-quickstart.yaml` manifest file with a text editor.
3. Update the `image` property for the containers by replacing *ghcr.io/azure-samples* with your ACR login server name.
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
... ```
-4. Save and close the file. In `vi`, use `:wq`.
+4. Save and close the file.
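+
+   If you'd rather make the replacement from the shell than in an editor, a one-liner like the following works as a sketch, assuming the `$ACRNAME` variable from the previous tutorial:
+
+   ```console
+   # Point the image references at your ACR login server (GNU sed; on macOS use sed -i '')
+   sed -i "s|ghcr.io/azure-samples|$ACRNAME.azurecr.io|g" aks-store-quickstart.yaml
+   ```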
### [Azure PowerShell](#tab/azure-powershell) 1. Get your login server address using the [`Get-AzContainerRegistry`][get-azcontainerregistry] cmdlet and query for your login server. Make sure you replace `<acrName>` with the name of your ACR instance. ```azurepowershell-interactive
- (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer
+ (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name $ACRNAME).LoginServer
```
-2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`.
-
- ```azurepowershell-interactive
- vi aks-store-quickstart.yaml
- ```
+2. Make sure you're in the cloned *aks-store-demo* directory, and then open the `aks-store-quickstart.yaml` manifest file with a text editor.
3. Update the `image` property for the containers by replacing *ghcr.io/azure-samples* with your ACR login server name.
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
... ```
-4. Save and close the file. In `vi`, use `:wq`.
+4. Save and close the file.
### [Azure Developer CLI](#tab/azure-azd)
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
The following example output shows the resources successfully created in the AKS cluster: ```output
- deployment.apps/rabbitmq created
+ statefulset.apps/rabbitmq created
+ configmap/rabbitmq-enabled-plugins created
service/rabbitmq created deployment.apps/order-service created service/order-service created
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
The following example output shows the resources successfully created in the AKS cluster: ```output
- deployment.apps/rabbitmq created
+ statefulset.apps/rabbitmq created
+ configmap/rabbitmq-enabled-plugins created
service/rabbitmq created deployment.apps/order-service created service/order-service created
When the application runs, a Kubernetes service exposes the application front en
kubectl get service store-front --watch ```
- Initially, the `EXTERNAL-IP` for the *store-front* service shows as *pending*:
+ Initially, the `EXTERNAL-IP` for the `store-front` service shows as `<pending>`:
```output store-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s ```
-2. When the `EXTERNAL-IP` address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+2. When the `EXTERNAL-IP` address changes from `<pending>` to a public IP address, use `CTRL-C` to stop the `kubectl` watch process.
The following example output shows a valid public IP address assigned to the service:
When the application runs, a Kubernetes service exposes the application front en
store-front LoadBalancer 10.0.34.242 52.179.23.131 80:30676/TCP 67s ```
-3. View the application in action by opening a web browser to the external IP address of your service.
+3. View the application in action by opening a web browser and navigating to the external IP address of your service: `http://<external-ip>`.
:::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png":::
Navigate to your Azure portal to find your deployment information.
1. Open your [Resource Group][azure-rg] on the Azure portal 1. Navigate to the Kubernetes service for your cluster 1. Select `Services and Ingress` under `Kubernetes Resources`
-1. Copy the External IP shown in the column for store-front
+1. Copy the External IP shown in the column for the `store-front` service
1. Paste the IP into your browser and visit your store page :::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png":::
+## Clean up resources
+
+Since you validated the application's functionality, you can now remove the application from the cluster. You deploy the application again in the next tutorial.
+
+1. Remove the application's resources using the `kubectl delete` command, specifying the manifest file you deployed earlier.
+
+ ```console
+ kubectl delete -f aks-store-quickstart.yaml
+ ```
+
+1. Check that all the application pods have been removed:
+
+ ```console
+ kubectl get pods
+ ```
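+
+   Assuming nothing else is deployed to the namespace, the output shows that no application pods remain:
+
+   ```output
+   No resources found in default namespace.
+   ```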
+ ## Next steps In this tutorial, you deployed a sample Azure application to a Kubernetes cluster in AKS. You learned how to:
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
Title: Kubernetes on Azure tutorial - Create an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to create an AKS cluster and use kubectl to connect to the Kubernetes main node. Previously updated : 02/14/2024 Last updated : 06/10/2024
Kubernetes provides a distributed platform for containerized applications. With
In this tutorial, part three of seven, you deploy a Kubernetes cluster in AKS. You learn how to: > [!div class="checklist"]-
+>
> * Deploy an AKS cluster that can authenticate to an Azure Container Registry (ACR). > * Install the Kubernetes CLI, `kubectl`. > * Configure `kubectl` to connect to your AKS cluster.
For information about AKS resource limits and region availability, see [Quotas,
To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
-* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
+* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. This tutorial continues to use the `$ACRNAME` environment variable that you set in the previous tutorial. If you don't have the variable set, set it now to the same value you used previously.
```azurecli-interactive az aks create \
To allow an AKS cluster to interact with other Azure resources, the Azure platfo
--name myAKSCluster \ --node-count 2 \ --generate-ssh-keys \
- --attach-acr <acrName>
+ --attach-acr $ACRNAME
``` > [!NOTE]
To allow an AKS cluster to interact with other Azure resources, the Azure platfo
* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. ```azurepowershell-interactive
- New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName>
+ New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach $ACRNAME
``` > [!NOTE]
To avoid needing an **Owner** or **Azure account administrator** role, you can a
```output NAME STATUS ROLES AGE VERSION
- aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
- aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
+ aks-nodepool1-19366578-vmss000000 Ready agent 47h v1.28.9
+ aks-nodepool1-19366578-vmss000001 Ready agent 47h v1.28.9
``` ### [Azure PowerShell](#tab/azure-powershell)
To avoid needing an **Owner** or **Azure account administrator** role, you can a
```output NAME STATUS ROLES AGE VERSION
- aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
- aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
+ aks-nodepool1-19366578-vmss000000 Ready agent 47h v1.28.9
+ aks-nodepool1-19366578-vmss000001 Ready agent 47h v1.28.9
``` ### [Azure Developer CLI](#tab/azure-azd)
To avoid needing an **Owner** or **Azure account administrator** role, you can a
```output NAME STATUS ROLES AGE VERSION
- aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
- aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
+ aks-nodepool1-19366578-vmss000000 Ready agent 47h v1.28.9
+ aks-nodepool1-19366578-vmss000001 Ready agent 47h v1.28.9
``` [!INCLUDE [azd-login-ts](./includes/azd/azd-login-ts.md)]
aks Tutorial Kubernetes Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-paas-services.md
Title: Kubernetes on Azure tutorial - Use PaaS services with an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to use the Azure Service Bus service with your AKS cluster. Previously updated : 10/23/2023 Last updated : 06/10/2024 #Customer intent: As a developer, I want to learn how to use PaaS services with an Azure Kubernetes Service (AKS) cluster so that I can deploy and manage my applications.
In previous tutorials, you used a RabbitMQ container to store orders submitted by customers.
kubectl get service store-front ```
-2. Navigate to the external IP address of the `store-front` service in your browser.
+2. Navigate to the external IP address of the `store-front` service in your browser using `http://<external-ip>`.
3. Place an order by choosing a product and selecting **Add to cart**. 4. Select **Cart** to view your order, and then select **Checkout**.
aks Tutorial Kubernetes Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md
Title: Kubernetes on Azure tutorial - Create an Azure Container Registry and build images description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload sample application container images. Previously updated : 11/28/2023 Last updated : 06/10/2024
Before creating an ACR instance, you need a resource group. An Azure resource group is a logical container into which Azure resources are deployed and managed.
2. Create an ACR instance using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses an environment variable, `$ACRNAME`, as a placeholder for the container registry name. You can set this environment variable to your unique ACR name to use in future commands. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput. ```azurepowershell-interactive
+ $rand=New-Object System.Random
+ $RAND=$rand.Next()
+ $ACRNAME="myregistry$RAND" # Or replace with your own name
New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name $ACRNAME -Location eastus -Sku Basic ```
aks Tutorial Kubernetes Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md
Title: Kubernetes on Azure tutorial - Prepare an application for Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS. Previously updated : 02/15/2023 Last updated : 06/10/2024
In the next tutorial, you learn how to create a cluster using the `azd` template
<!-- LINKS - external --> [docker-compose]: https://docs.docker.com/compose/
-[docker-for-linux]: https://docs.docker.com/engine/installation/#supported-platforms
-[docker-for-mac]: https://docs.docker.com/docker-for-mac/
-[docker-for-windows]: https://docs.docker.com/docker-for-windows/
+[docker-for-linux]: https://docs.docker.com/desktop/install/linux-install/
+[docker-for-mac]: https://docs.docker.com/desktop/install/mac-install/
+[docker-for-windows]: https://docs.docker.com/desktop/install/windows-install/
[docker-get-started]: https://docs.docker.com/get-started/ [docker-images]: https://docs.docker.com/engine/reference/commandline/images/ [docker-ps]: https://docs.docker.com/engine/reference/commandline/ps/
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
Title: Kubernetes on Azure tutorial - Scale applications in Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to scale nodes and pods and implement horizontal pod autoscaling. Previously updated : 03/05/2023 Last updated : 06/10/2024
The following example increases the number of nodes to three in the Kubernetes cluster.
Once the cluster successfully scales, your output will be similar to the following example output: ```output
+ "aadProfile": null,
+ "addonProfiles": null,
"agentPoolProfiles": [ {
+ ...
"count": 3,
- "dnsPrefix": null,
- "fqdn": null,
- "name": "myAKSCluster",
- "osDiskSizeGb": null,
+ "mode": "System",
+ "name": "nodepool1",
+ "osDiskSizeGb": 128,
+ "osDiskType": "Managed",
"osType": "Linux", "ports": null,
- "vmSize": "Standard_D2_v2",
+ "vmSize": "Standard_DS2_v2",
"vnetSubnetId": null
+ ...
}
+ ...
+ ]
``` ### [Azure PowerShell](#tab/azure-powershell)
The following example increases the number of nodes to three in the Kubernetes cluster.
Once the cluster successfully scales, your output will be similar to the following example output: ```output
- ProvisioningState : Succeeded
- MaxAgentPools : 100
- KubernetesVersion : 1.19.9
- DnsPrefix : myAKSCluster
- Fqdn : myakscluster-000a0aa0.hcp.eastus.azmk8s.io
- PrivateFQDN :
- AgentPoolProfiles : {default}
- WindowsProfile : Microsoft.Azure.Commands.Aks.Models.PSManagedClusterWindowsProfile
- AddonProfiles : {}
- NodeResourceGroup : MC_myresourcegroup_myAKSCluster_eastus
- EnableRBAC : True
- EnablePodSecurityPolicy :
- NetworkProfile : Microsoft.Azure.Commands.Aks.Models.PSContainerServiceNetworkProfile
- AadProfile :
- ApiServerAccessProfile :
- Identity :
- LinuxProfile : Microsoft.Azure.Commands.Aks.Models.PSContainerServiceLinuxProfile
- ServicePrincipalProfile : Microsoft.Azure.Commands.Aks.Models.PSContainerServiceServicePrincipalProfile
- Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myresourcegroup/providers/Micros
- oft.ContainerService/managedClusters/myAKSCluster
- Name : myAKSCluster
- Type : Microsoft.ContainerService/ManagedClusters
- Location : eastus
- Tags : {}
+ ...
+ ProvisioningState : Succeeded
+ MaxAgentPools : 100
+ KubernetesVersion : 1.28
+ CurrentKubernetesVersion : 1.28.9
+ DnsPrefix : myAKSCluster
+ Fqdn : myakscluster-000a0aa0.hcp.eastus.azmk8s.io
+ PrivateFQDN :
+ AzurePortalFQDN : myakscluster-000a0aa0.portal.hcp.eastus.azmk8s.io
+ AgentPoolProfiles : {default}
+ ...
+ ResourceGroupName : myResourceGroup
+ Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Mic
+ rosoft.ContainerService/managedClusters/myAKSCluster
+ Name : myAKSCluster
+ Type : Microsoft.ContainerService/ManagedClusters
+ Location : eastus
+ Tags :
```
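
For reference, the node count change reflected in these outputs comes from a scale command such as the following; a sketch using the tutorial's example names:

```azurecli-interactive
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
```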
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 11/02/2023 Last updated : 06/10/2024
If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ```
- The following example output shows the current version as *1.26.6* and lists the available versions under `upgrades`:
+ The following example output shows the current version as *1.28.9* and lists the available versions under `upgrades`:
```output
- {
- "agentPoolProfiles": null,
- "controlPlaneProfile": {
- "kubernetesVersion": "1.26.6",
+ {
+ "agentPoolProfiles": null,
+ "controlPlaneProfile": {
+ "kubernetesVersion": "1.28.9",
+ ...
+ "upgrades": [
+ {
+ "isPreview": null,
+ "kubernetesVersion": "1.29.4"
+ },
+ {
+ "isPreview": null,
+ "kubernetesVersion": "1.29.2"
+ }
+ ]
+ },
...
- "upgrades": [
- {
- "isPreview": null,
- "kubernetesVersion": "1.27.1"
- },
- {
- "isPreview": null,
- "kubernetesVersion": "1.27.3"
- }
- ]
- },
- ...
- }
+ }
``` ### [Azure PowerShell](#tab/azure-powershell)
If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0
```azurepowershell-interactive Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
- Select-Object -Property Name, KubernetesVersion, Location
+ Select-Object -Property Name, CurrentKubernetesVersion, Location
```
- The following example output shows the current version as *1.26.6* and the location as *eastus*:
+ The following example output shows the current version as *1.28.9* and the location as *eastus*:
```output
- Name KubernetesVersion Location
- - -- --
- myAKSCluster 1.26.6 eastus
+ Name CurrentKubernetesVersion Location
+ - --
+ myAKSCluster 1.28.9 eastus
``` 2. Check which Kubernetes upgrade releases are available in the region where your cluster resides using the [`Get-AzAksVersion`][get-azaksversion] cmdlet.
If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0
```output Default IsPreview OrchestratorType OrchestratorVersion - - -
- Kubernetes 1.27.1
- Kubernetes 1.27.3
+ Kubernetes 1.29.4
+ Kubernetes 1.29.2
+ True Kubernetes 1.28.9
+ Kubernetes 1.28.5
+ ...
``` ### [Azure portal](#tab/azure-portal)
You can either [manually upgrade your cluster](#manually-upgrade-cluster) or [co
--kubernetes-version KUBERNETES_VERSION ```
+* You're prompted to confirm the upgrade operation, and to confirm that you want to upgrade the control plane *and* all the node pools to the selected version of Kubernetes:
+
+ ```console
+ Are you sure you want to perform this operation? (y/N): y
+ Since control-plane-only argument is not specified, this will upgrade the control plane AND all nodepools to version 1.29.2. Continue? (y/N): y
+ ```
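+
+For reference, the full manual upgrade command looks like the following; a sketch using the tutorial's example names and a target version from the earlier `az aks get-upgrades` output:
+
+```azurecli-interactive
+az aks upgrade \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --kubernetes-version 1.29.2
+```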
+ > [!NOTE] > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, you must first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
- The following example output shows the result of upgrading to *1.27.3*. Notice the `kubernetesVersion` now shows *1.27.3*:
+ The following example output shows the result of upgrading to *1.29.2*. Notice the `kubernetesVersion` now shows *1.29.2*:
```output {
+ ...
"agentPoolProfiles": [ {
+ ...
"count": 3,
+ "currentOrchestratorVersion": "1.29.2",
"maxPods": 110, "name": "nodepool1",
+ "nodeImageVersion": "AKSUbuntu-2204gen2containerd-202405.27.0",
+ "orchestratorVersion": "1.29.2",
"osType": "Linux",
- "vmSize": "Standard_DS1_v2",
+ "upgradeSettings": {
+ "drainTimeoutInMinutes": null,
+ "maxSurge": "10%",
+ "nodeSoakDurationInMinutes": null,
+ "undrainableNodeBehavior": null
+ },
+ "vmSize": "Standard_DS2_v2",
+ ...
} ],
+ ...
+ "currentKubernetesVersion": "1.29.2",
"dnsPrefix": "myAKSClust-myResourceGroup-19da35", "enableRbac": false, "fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io", "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
- "kubernetesVersion": "1.27.3",
+ "kubernetesVersion": "1.29.2",
"location": "eastus", "name": "myAKSCluster", "type": "Microsoft.ContainerService/ManagedClusters"
+ ...
} ```
You can either [manually upgrade your cluster](#manually-upgrade-cluster) or [co
> [!NOTE] > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
- The following example output shows the result of upgrading to *1.27.3*. Notice the `KubernetesVersion` now shows *1.27.3*:
+ The following example output shows the result of upgrading to *1.29.2*. Notice the `KubernetesVersion` now shows *1.29.2*:
```output
- ProvisioningState : Succeeded
- MaxAgentPools : 100
- KubernetesVersion : 1.27.3
- PrivateFQDN :
- AgentPoolProfiles : {default}
- Name : myAKSCluster
- Type : Microsoft.ContainerService/ManagedClusters
- Location : eastus
- Tags : {}
+ ...
+ ProvisioningState : Succeeded
+ MaxAgentPools : 100
+ KubernetesVersion : 1.29.2
+ CurrentKubernetesVersion : 1.29.2
+ ...
+ ResourceGroupName : myResourceGroup
+ Name : myAKSCluster
+ Type : Microsoft.ContainerService/ManagedClusters
+ Location : eastus
+ Tags :
``` #### [Azure portal](#tab/azure-portal)
AKS regularly provides new node images. Linux node images are updated weekly, an
The following example output shows some of the above events listed during an upgrade: ```output
+ LAST SEEN TYPE REASON OBJECT MESSAGE
...
- default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
- ...
- default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
+ 5m Normal Drain node/aks-nodepool1-96663640-vmss000000 Draining node: aks-nodepool1-96663640-vmss000000
+ 5m Normal Upgrade node/aks-nodepool1-96663640-vmss000000 Deleting node aks-nodepool1-96663640-vmss000000 from API server
+ 4m Normal Upgrade node/aks-nodepool1-96663640-vmss000000 Successfully reimaged node: aks-nodepool1-96663640-vmss000000
+ 4m Normal Upgrade node/aks-nodepool1-96663640-vmss000000 Successfully upgraded node: aks-nodepool1-96663640-vmss000000
+ 4m Normal Drain node/aks-nodepool1-96663640-vmss000000 Draining node: aks-nodepool1-96663640-vmss000000
... ```
AKS regularly provides new node images. Linux node images are updated weekly, an
```output Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn - - - -
- myAKSCluster eastus myResourceGroup 1.27.3 1.27.3 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
+ myAKSCluster eastus myResourceGroup 1.29.2 1.29.2 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
``` ### [Azure PowerShell](#tab/azure-powershell)
AKS regularly provides new node images. Linux node images are updated weekly, an
The following example output shows the AKS cluster runs *Kubernetes version 1.29.2*: ```output
- Name Location KubernetesVersion ProvisioningState
- - -- -- --
- myAKSCluster eastus 1.27.3 Succeeded
+ Name Location KubernetesVersion ProvisioningState
+ - -- -- --
+ myAKSCluster eastus 1.29.2 Succeeded
``` ### [Azure portal](#tab/azure-portal)
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service Environment v3.
Previously updated : 7/11/2024 Last updated : 7/23/2024 # Migration to App Service Environment v3 using the side-by-side migration feature
Once you're ready to redirect traffic, you can complete the final step of the mi
> > [!NOTE]
-> You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support.
+> It's important to complete this step as soon as possible. When your App Service Environment is in the hybrid state, it's unable to receive platform upgrades and security patches, which makes it more vulnerable to instability and security threats.
> If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, contact support.
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties
This step is your opportunity to test and validate your new App Service Environment v3.
-Once you confirm your apps are working as expected, you can finalize the migration by running the following command. This command also deletes your old environment. You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support.
+Once you confirm your apps are working as expected, you can finalize the migration by running the following command. This command also deletes your old environment.
If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to discuss your options. Don't run the DNS change command since that command completes the migration.
automation Manage Runtime Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runtime-environment.md
Title: Manage Runtime environment and associated runbooks in Azure Automation
description: This article tells how to manage runbooks in Runtime environment and associated runbooks Azure Automation Previously updated : 06/28/2024 Last updated : 07/24/2024
An Azure Automation account in supported public region (except Central India, Ge
> [!NOTE] > - When you import a package, it might take several minutes. 100MB is the maximum total size of the files that you can import. > - Use *.zip* files for PowerShell runbook types as mentioned [here](/powershell/scripting/developer/module/understanding-a-windows-powershell-module)
- > - For Python 3.8 packages, use .tar.gz or .whl files targeting cp38-amd64.
+ > - For Python 3.8 packages, use .whl files targeting cp38-amd64.
> - For Python 3.10 (preview) packages, use .whl files targeting cp310 Linux OS. 1. Select **Next** and in the **Review + Create** tab, verify that the settings are correct. When you select **Create**, Azure runs validation on Runtime environment settings that you have chosen. If the validation passes, you can proceed to create Runtime environment else, the portal indicates the settings that you need to modify.
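
To fetch a wheel that matches the Python 3.8 runtime, you can ask pip for one explicitly. The following is a minimal sketch using `pip download`; the package name is only an example:

```console
# Download a cp38, amd64 (manylinux) wheel without pulling dependencies
pip download requests --no-deps --only-binary=:all: --platform manylinux2014_x86_64 --implementation cp --python-version 3.8
```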
automation Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-packages.md
Title: Manage Python 2 packages in Azure Automation
description: This article tells how to manage Python 2 packages in Azure Automation. Previously updated : 04/23/2024 Last updated : 07/23/2024
For information on managing Python 3 packages, see [Manage Python 3 packages](./
:::image type="content" source="media/python-packages/add-python-package.png" alt-text="Screenshot of the Python packages page shows Python packages in the left menu and Add a Python package highlighted.":::
-2. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** or **.tar.gz** file.
+2. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** file.
3. Enter the name and select the **Runtime version** as 2.x.x 4. Select **Import**.
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
Previously updated : 05/09/2024 Last updated : 07/17/2024
To use the ACL integration, your client application must assume the identity of
For information on using Microsoft Entra ID with Azure CLI, see the [references pages for identity](/cli/azure/redis/identity).
+## Disable access key authentication on your cache
+
+Using Microsoft Entra ID is the secure way to connect your cache. We recommend using Microsoft Entra ID and disabling access keys.
+
+When you disable access key authentication for a cache, all existing client connections are terminated, whether they use access keys or Microsoft Entra ID authentication. Follow the recommended Redis client best practices to implement proper retry mechanisms for reconnecting Microsoft Entra-based connections, if any.
+
+Before you disable access keys:
+
+- Microsoft Entra ID authorization must be enabled.
+- Disabling access keys is only available for Basic, Standard, and Premium tier caches.
+- For geo-replicated caches, you must: 1) unlink the caches, 2) disable access keys, and finally, 3) relink the caches.
+
+If you have a cache where access keys are used, and you want to disable access keys, follow this procedure.
+
+1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to disable access keys.
+
+1. Select **Authentication** from the Resource menu.
+
+1. In the working pane, select **Access keys**.
+
+1. Select **Disable Access Keys Authentication**. Then, select **Save**.
+
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-disable-access-keys.png" alt-text="Screenshot showing access keys in the working pane with a red box around Disable Access Key Authentication. ":::
+
+1. You're asked to confirm that you want to update your configuration. Select **Yes**.
+
+> [!IMPORTANT]
+> When the **Disable Access Key Authentication** setting is changed for a cache, all existing client connections, whether they use access keys or Microsoft Entra ID, are terminated. Follow the best practices to implement proper retry mechanisms for reconnecting Microsoft Entra-based connections. For more information, see [Connection resilience](cache-best-practices-connection.md).
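+
+If you manage the cache from the command line, the same setting can be changed by updating the cache's `disableAccessKeyAuthentication` property. The following is a sketch, assuming that property name and a supporting API version; verify it against your tooling before relying on it:
+
+```azurecli-interactive
+az redis update --name myCache --resource-group myResourceGroup \
+    --set disableAccessKeyAuthentication=true
+```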
+ ## Using data access configuration with your cache If you would like to use a custom access policy instead of Redis Data Owner, go to the **Data Access Configuration** on the Resource menu. For more information, see [Configure a custom data access policy for your application](cache-configure-role-based-access-control.md#configure-a-custom-data-access-policy-for-your-application).
azure-cache-for-redis Monitor Cache Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/monitor-cache-reference.md
description: This article contains important reference material you need when yo
Last updated 05/13/2024 -+
The following list provides details and more information about the supported Azu
- Sets - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`. - Total Keys
- - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys return the maximum number of keys of the shard that had the maximum number of keys during the reporting interval.
+ - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command.
+
+ > [!IMPORTANT]
+ > Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys return the maximum number of keys of the shard that had the maximum number of keys during the reporting interval.
+
- Total Operations - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub, there are no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there are `Total Operations` metrics that reflect the cache usage for pub/sub operations. - Used Memory
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
For information about installing ITSMC, see [Add the IT Service Management Conne
### OAuth setup
-ServiceNow supported versions include Vancouver, Utah, Tokyo, San Diego, Rome, Quebec, Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
+ServiceNow supported versions include Washington, Vancouver, Utah, Tokyo, San Diego, Rome, Quebec, Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
ServiceNow admins must generate a client ID and client secret for their ServiceNow instance. See the following information as required:
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Title: Monitor Azure app services performance Java | Microsoft Docs description: Application performance monitoring for Azure app services using Java. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 06/23/2023 Last updated : 08/22/2024 ms.devlang: java
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
This article provides onboarding guidance for the following types of clusters. A
> [!NOTE] > The Managed Prometheus Arc-Enabled Kubernetes extension does not support the following configurations:
-> * Red Hat Openshift distributions
- > * Windows nodes
+> * Red Hat OpenShift distributions, including Azure Red Hat OpenShift (ARO)
+> * Windows nodes
## Workspaces
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Previously updated : 06/05/2023 Last updated : 07/23/2024 # Customer intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide to ensure my data and services are sufficiently protected in the event of datacenter failure.
A subset of the availability zones that support data resilience currently also s
| East US | | :white_check_mark: | | | East US 2 | | :white_check_mark: | :white_check_mark: | | South Central US | :white_check_mark: | :white_check_mark: | |
+| Spain Central | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| West US 2 | | :white_check_mark: | :white_check_mark: | | West US 3 | :white_check_mark: | :white_check_mark: | | | **Asia Pacific** | | | |
A subset of the availability zones that support data resilience currently also s
Learn more about how to: - [Set up a dedicated cluster](logs-dedicated-clusters.md).-- [Migrate Log Analytics workspaces to availability zone support](../../availability-zones/migrate-monitor-log-analytics.md).
+- [Migrate Log Analytics workspaces to availability zone support](../../availability-zones/migrate-monitor-log-analytics.md).
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability | |: |: |: |
-| Azure NetApp Files customer-managed keys | Generally available (GA) | No |
| Azure NetApp Files large volumes | Generally available (GA) | Generally available [(select regions)](large-volumes-requirements-considerations.md#supported-regions) | ## Portal access
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Azure NetApp Files customer-managed keys is supported for the following regions:
* Italy North * Japan East * Japan West- * Korea Central * Korea South * North Central US
Azure NetApp Files customer-managed keys is supported for the following regions:
* UAE North * UK South * UK West
+* US Gov Arizona
+* US Gov Texas
+* US Gov Virginia
* West Europe * West US * West US 2
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## July 2024
+* [Customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md#supported-regions) is now available in all US Gov regions
+ * [Azure NetApp Files large volume enhancement:](large-volumes-requirements-considerations.md) increased throughput and maximum size limit of 2-PiB volume (preview) Azure NetApp Files large volumes now support increased maximum throughput and size limits. This update brings an increased size limit to **one PiB,** available via Azure Feature Exposure Control (AFEC), allowing for more extensive and robust data management solutions for various workloads, including HPC, EDA, VDI, and more.
azure-resource-manager Bicep Core Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md
+
+ Title: Bicep warnings and error codes
+description: Lists the warnings and error codes.
++ Last updated : 07/23/2024++
+# Bicep warning and error codes
+
+If you need more information about a particular warning or error code, select the **Feedback** button in the upper right corner of the page and specify the code.
+
+| Code | Description |
+||-|
+| BCP001 | The following token is not recognized: "{token}". |
+| BCP002 | The multi-line comment at this location is not terminated. Terminate it with the */ character sequence. |
+| BCP003 | The string at this location is not terminated. Terminate the string with a single quote character. |
+| BCP004 | The string at this location is not terminated due to an unexpected new line character. |
+| BCP005 | The string at this location is not terminated. Complete the escape sequence and terminate the string with a single unescaped quote character. |
+| BCP006 | The specified escape sequence is not recognized. Only the following escape sequences are allowed: {ToQuotedString(escapeSequences)}. |
+| BCP007 | This declaration type is not recognized. Specify a metadata, parameter, variable, resource, or output declaration. |
+| BCP008 | Expected the "=" token, or a newline at this location. |
+| BCP009 | Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location. |
+| BCP010 | Expected a valid 64-bit signed integer. |
+| BCP011 | The type of the specified value is incorrect. Specify a string, boolean, or integer literal. |
+| BCP012 | Expected the "{keyword}" keyword at this location. |
+| BCP013 | Expected a parameter identifier at this location. |
+| BCP015 | Expected a variable identifier at this location. |
+| BCP016 | Expected an output identifier at this location. |
+| BCP017 | Expected a resource identifier at this location. |
+| BCP018 | Expected the "{character}" character at this location. |
+| BCP019 | Expected a new line character at this location. |
+| BCP020 | Expected a function or property name at this location. |
+| BCP021 | Expected a numeric literal at this location. |
+| BCP022 | Expected a property name at this location. |
+| BCP023 | Expected a variable or function name at this location. |
+| BCP024 | The identifier exceeds the limit of {LanguageConstants.MaxIdentifierLength}. Reduce the length of the identifier. |
+| BCP025 | The property "{property}" is declared multiple times in this object. Remove or rename the duplicate properties. |
+| BCP026 | The output expects a value of type "{expectedType}" but the provided value is of type "{actualType}". |
+| BCP028 | Identifier "{identifier}" is declared multiple times. Remove or rename the duplicates. |
+| BCP029 | The resource type is not valid. Specify a valid resource type of format "&lt;types>@&lt;apiVersion>". |
+| BCP030 | The output type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. |
+| BCP031 | The parameter type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. |
+| BCP032 | The value must be a compile-time constant. |
+| <a id='BCP033' />[BCP033](./diagnostics/bcp033.md) | Expected a value of type &lt;data-type> but the provided value is of type &lt;data-type>. |
+| BCP034 | The enclosing array expected an item of type "{expectedType}", but the provided item was of type "{actualType}". |
+| <a id='BCP035' />[BCP035](./diagnostics/bcp035.md) | The specified &lt;data-type> declaration is missing the following required properties: &lt;property-name>. |
+| <a id='BCP036' />[BCP036](./diagnostics/bcp036.md) | The property &lt;property-name> expected a value of type &lt;data-type> but the provided value is of type &lt;data-type>. |
+| <a id='BCP037' />[BCP037](./diagnostics/bcp037.md) | The property &lt;property-name> is not allowed on objects of type &lt;type-definition>. |
+| <a id='BCP040' />[BCP040](./diagnostics/bcp040.md) | String interpolation is not supported for keys on objects of type &lt;type-definition>. |
+| BCP041 | Values of type "{valueType}" cannot be assigned to a variable. |
+| BCP043 | This is not a valid expression. |
+| BCP044 | Cannot apply operator "{operatorName}" to operand of type "{type}". |
+| BCP045 | Cannot apply operator "{operatorName}" to operands of type "{type1}" and "{type2}".{(additionalInfo is null ? string.Empty : " " + additionalInfo)} |
+| BCP046 | Expected a value of type "{type}". |
+| BCP047 | String interpolation is unsupported for specifying the resource type. |
+| BCP048 | Cannot resolve function overload. For details, see the documentation. |
+| BCP049 | The array index must be of type "{LanguageConstants.String}" or "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". |
+| BCP050 | The specified path is empty. |
+| BCP051 | The specified path begins with "/". Files must be referenced using relative paths. |
+| BCP052 | The type "{type}" does not contain property "{badProperty}". |
+| BCP053 | The type "{type}" does not contain property "{badProperty}". Available properties include {ToQuotedString(availableProperties)}. |
+| BCP054 | The type "{type}" does not contain any properties. |
+| BCP055 | Cannot access properties of type "{wrongType}". An "{LanguageConstants.Object}" type is required. |
+| BCP056 | The reference to name "{name}" is ambiguous because it exists in namespaces {ToQuotedString(namespaces)}. The reference must be fully qualified. |
+| BCP057 | The name "{name}" does not exist in the current context. |
+| BCP059 | The name "{name}" is not a function. |
+| BCP060 | The "variables" function is not supported. Directly reference variables by their symbolic names. |
+| BCP061 | The "parameters" function is not supported. Directly reference parameters by their symbolic names. |
+| BCP062 | The referenced declaration with name "{name}" is not valid. |
+| BCP063 | The name "{name}" is not a parameter, variable, resource or module. |
+| BCP064 | Found unexpected tokens in interpolated expression. |
+| BCP065 | Function "{functionName}" is not valid at this location. It can only be used as a parameter default value. |
+| BCP066 | Function "{functionName}" is not valid at this location. It can only be used in resource declarations. |
+| BCP067 | Cannot call functions on type "{wrongType}". An "{LanguageConstants.Object}" type is required. |
+| BCP068 | Expected a resource type string. Specify a valid resource type of format "&lt;types>@&lt;apiVersion>". |
+| BCP069 | The function "{function}" is not supported. Use the "{@operator}" operator instead. |
+| BCP070 | Argument of type "{argumentType}" is not assignable to parameter of type "{parameterType}". |
+| BCP071 | Expected {expected}, but got {argumentCount}. |
+| <a id='BCP072' />[BCP072](./diagnostics/bcp072.md) | This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. |
+| <a id='BCP073' />[BCP073](./diagnostics/bcp073.md) | The property &lt;property-name> is read-only. Expressions cannot be assigned to read-only properties. |
+| BCP074 | Indexing over arrays requires an index of type "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". |
+| BCP075 | Indexing over objects requires an index of type "{LanguageConstants.String}" but the provided index was of type "{wrongType}". |
+| BCP076 | Cannot index over expression of type "{wrongType}". Arrays or objects are required. |
+| BCP077 | The property "{badProperty}" on type "{type}" is write-only. Write-only properties cannot be accessed. |
+| BCP078 | The property "{propertyName}" requires a value of type "{expectedType}", but none was supplied. |
+| BCP079 | This expression is referencing its own declaration, which is not allowed. |
+| BCP080 | The expression is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). |
+| BCP081 | Resource type "{resourceTypeReference.FormatName()}" does not have types available. Bicep is unable to validate resource properties prior to deployment, but this will not block the resource from being deployed. |
+| BCP082 | The name "{name}" does not exist in the current context. Did you mean "{suggestedName}"? |
+| BCP083 | The type "{type}" does not contain property "{badProperty}". Did you mean "{suggestedProperty}"? |
+| BCP084 | The symbolic name "{name}" is reserved. Please use a different symbolic name. Reserved namespaces are {ToQuotedString(namespaces.OrderBy(ns => ns))}. |
+| BCP085 | The specified file path contains one or more invalid path characters. The following are not permitted: {ToQuotedString(forbiddenChars.OrderBy(x => x).Select(x => x.ToString()))}. |
+| BCP086 | The specified file path ends with an invalid character. The following are not permitted: {ToQuotedString(forbiddenPathTerminatorChars.OrderBy(x => x).Select(x => x.ToString()))}. |
+| BCP087 | Array and object literals are not allowed here. |
+| BCP088 | The property "{property}" expected a value of type "{expectedType}" but the provided value is of type "{actualStringLiteral}". Did you mean "{suggestedStringLiteral}"? |
+| BCP089 | The property "{property}" is not allowed on objects of type "{type}". Did you mean "{suggestedProperty}"? |
+| BCP090 | This module declaration is missing a file path reference. |
+| BCP091 | An error occurred reading file. {failureMessage} |
+| BCP092 | String interpolation is not supported in file paths. |
+| BCP093 | File path "{filePath}" could not be resolved relative to "{parentPath}". |
+| BCP094 | This module references itself, which is not allowed. |
+| BCP095 | The file is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). |
+| BCP096 | Expected a module identifier at this location. |
+| BCP097 | Expected a module path string. This should be a relative path to another bicep file, e.g. 'myModule.bicep' or '../parent/myModule.bicep' |
+| BCP098 | The specified file path contains a "\" character. Use "/" instead as the directory separator character. |
+| BCP099 | The "{LanguageConstants.ParameterAllowedPropertyName}" array must contain one or more items. |
+| BCP100 | The function "if" is not supported. Use the "?:\" (ternary conditional) operator instead, e.g. condition ? ValueIfTrue : ValueIfFalse |
+| BCP101 | The "createArray" function is not supported. Construct an array literal using []. |
+| BCP102 | The "createObject" function is not supported. Construct an object literal using {}. |
+| BCP103 | The following token is not recognized: "{token}". Strings are defined using single quotes in bicep. |
+| BCP104 | The referenced module has errors. |
+| BCP105 | Unable to load file from URI "{fileUri}". |
+| BCP106 | Expected a new line character at this location. Commas are not used as separator delimiters. |
+| BCP107 | The function "{name}" does not exist in namespace "{namespaceType.Name}". |
+| BCP108 | The function "{name}" does not exist in namespace "{namespaceType.Name}". Did you mean "{suggestedName}"? |
+| BCP109 | The type "{type}" does not contain function "{name}". |
+| BCP110 | The type "{type}" does not contain function "{name}". Did you mean "{suggestedName}"? |
+| BCP111 | The specified file path contains invalid control code characters. |
+| BCP112 | The "{LanguageConstants.TargetScopeKeyword}" cannot be declared multiple times in one file. |
+| BCP113 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeTenant}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include tenant: tenant(), named management group: managementGroup(&lt;name>), named subscription: subscription(&lt;subId>), or named resource group in a named subscription: resourceGroup(&lt;subId>, &lt;name>). |
+| BCP114 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeManagementGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current management group: managementGroup(), named management group: managementGroup(&lt;name>), named subscription: subscription(&lt;subId>), tenant: tenant(), or named resource group in a named subscription: resourceGroup(&lt;subId>, &lt;name>). |
+| BCP115 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeSubscription}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current subscription: subscription(), named subscription: subscription(&lt;subId>), named resource group in same subscription: resourceGroup(&lt;name>), named resource group in different subscription: resourceGroup(&lt;subId>, &lt;name>), or tenant: tenant(). |
+| BCP116 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeResourceGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current resource group: resourceGroup(), named resource group in same subscription: resourceGroup(&lt;name>), named resource group in a different subscription: resourceGroup(&lt;subId>, &lt;name>), current subscription: subscription(), named subscription: subscription(&lt;subId>) or tenant: tenant(). |
+| BCP117 | An empty indexer is not allowed. Specify a valid expression. |
+| BCP118 | Expected the "{" character, the "[" character, or the "if" keyword at this location. |
+| BCP119 | Unsupported scope for extension resource deployment. Expected a resource reference. |
+| BCP120 | This expression is being used in an assignment to the "{propertyName}" property of the "{objectTypeName}" type, which requires a value that can be calculated at the start of the deployment. |
+| BCP121 | Resources: {ToQuotedString(resourceNames)} are defined with this same name in a file. Rename them or split into different modules. |
+| BCP122 | Modules: {ToQuotedString(moduleNames)} are defined with this same name and this same scope in a file. Rename them or split into different modules. |
+| BCP123 | Expected a namespace or decorator name at this location. |
+| BCP124 | The decorator "{decoratorName}" can only be attached to targets of type "{attachableType}", but the target has type "{targetType}". |
+| BCP125 | Function "{functionName}" cannot be used as a parameter decorator. |
+| BCP126 | Function "{functionName}" cannot be used as a variable decorator. |
+| BCP127 | Function "{functionName}" cannot be used as a resource decorator. |
+| BCP128 | Function "{functionName}" cannot be used as a module decorator. |
+| BCP129 | Function "{functionName}" cannot be used as an output decorator. |
+| BCP130 | Decorators are not allowed here. |
+| BCP132 | Expected a declaration after the decorator. |
+| BCP133 | The unicode escape sequence is not valid. Valid unicode escape sequences range from \\u{0} to \\u{10FFFF}. |
+| BCP134 | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} is not valid for this module. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. |
+| BCP135 | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} is not valid for this resource type. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. |
+| BCP136 | Expected a loop item variable identifier at this location. |
+| BCP137 | Loop expected an expression of type "{LanguageConstants.Array}" but the provided value is of type "{actualType}". |
+| BCP138 | For-expressions are not supported in this context. For-expressions may be used as values of resource, module, variable, and output declarations, or values of resource and module properties. |
+| BCP139 | A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope. |
+| BCP140 | The multi-line string at this location is not terminated. Terminate it with "'''. |
+| BCP141 | The expression cannot be used as a decorator as it is not callable. |
+| BCP142 | Property value for-expressions cannot be nested. |
+| BCP143 | For-expressions cannot be used with properties whose names are also expressions. |
+| BCP144 | Directly referencing a resource or module collection is not currently supported here. Apply an array indexer to the expression. |
+| BCP145 | Output "{identifier}" is declared multiple times. Remove or rename the duplicates. |
+| BCP147 | Expected a parameter declaration after the decorator. |
+| BCP148 | Expected a variable declaration after the decorator. |
+| BCP149 | Expected a resource declaration after the decorator. |
+| BCP150 | Expected a module declaration after the decorator. |
+| BCP151 | Expected an output declaration after the decorator. |
+| BCP152 | Function "{functionName}" cannot be used as a decorator. |
+| BCP153 | Expected a resource or module declaration after the decorator. |
+| BCP154 | Expected a batch size of at least {limit} but the specified value was "{value}". |
+| BCP155 | The decorator "{decoratorName}" can only be attached to resource or module collections. |
+| BCP156 | The resource type segment "{typeSegment}" is invalid. Nested resources must specify a single type segment, and optionally can specify an API version using the format "&lt;type>@&lt;apiVersion>". |
+| BCP157 | The resource type cannot be determined due to an error in the containing resource. |
+| BCP158 | Cannot access nested resources of type "{wrongType}". A resource type is required. |
+| BCP159 | The resource "{resourceName}" does not contain a nested resource named "{identifierName}". Known nested resources are: {ToQuotedString(nestedResourceNames)}. |
+| BCP160 | A nested resource cannot appear inside of a resource with a for-expression. |
+| BCP162 | Expected a loop item variable identifier or "(" at this location. |
+| BCP164 | A child resource's scope is computed based on the scope of its ancestor resource. This means that using the "scope" property on a child resource is unsupported. |
+| BCP165 | A resource's computed scope must match that of the Bicep file for it to be deployable. This resource's scope is computed from the "scope" property value assigned to ancestor resource "{ancestorIdentifier}". You must use modules to deploy resources to a different scope. |
+| BCP166 | Duplicate "{decoratorName}" decorator. |
+| BCP167 | Expected the "{" character or the "if" keyword at this location. |
+| BCP168 | Length must not be a negative value. |
+| BCP169 | Expected resource name to contain {expectedSlashCount} "/" character(s). The number of name segments must match the number of segments in the resource type. |
+| BCP170 | Expected resource name to not contain any "/" characters. Child resources with a parent resource reference (via the parent property or via nesting) must not contain a fully-qualified name. |
+| BCP171 | Resource type "{resourceType}" is not a valid child resource of parent "{parentResourceType}". |
+| BCP172 | The resource type cannot be validated due to an error in parent resource "{resourceName}". |
+| BCP173 | The property "{property}" cannot be used in an existing resource declaration. |
+| BCP174 | Type validation is not available for resource types declared containing a "/providers/" segment. Please instead use the "scope" property. |
+| BCP176 | Values of the "any" type are not allowed here. |
+| BCP177 | This expression is being used in the if-condition expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP178 | This expression is being used in the for-expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP179 | Unique resource or deployment name is required when looping. The loop item variable "{itemVariableName}" or the index variable "{indexVariableName}" must be referenced in at least one of the value expressions of the following properties in the loop body: {ToQuotedString(expectedVariantProperties)} |
+| BCP180 | Function "{functionName}" is not valid at this location. It can only be used when directly assigning to a module parameter with a secure decorator. |
+| BCP181 | This expression is being used in an argument of the function "{functionName}", which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP182 | This expression is being used in the for-body of the variable "{variableName}", which requires values that can be calculated at the start of the deployment.{variableDependencyChainClause}{violatingPropertyNameClause}{accessiblePropertiesClause} |
+| BCP183 | The value of the module "params" property must be an object literal. |
+| BCP184 | File '{filePath}' exceeded maximum size of {maxSize} {unit}. |
+| BCP185 | Encoding mismatch. File was loaded with '{detectedEncoding}' encoding. |
+| BCP186 | Unable to parse literal JSON value. Please ensure that it is well-formed. |
+| BCP187 | The property "{property}" does not exist in the resource or type definition, although it might still be valid.{TypeInaccuracyClause} |
+| BCP188 | The referenced ARM template has errors. Please see [https://aka.ms/arm-template](https://aka.ms/arm-template) for information on how to diagnose and fix the template. |
+| BCP189 | (allowedSchemes.Contains(ArtifactReferenceSchemes.Local, StringComparer.Ordinal), allowedSchemes.Any(scheme => !string.Equals(scheme, ArtifactReferenceSchemes.Local, StringComparison.Ordinal))) switch { (false, false) => "Module references are not supported in this context.", (false, true) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a module reference using one of the following schemes: {FormatSchemes()}", (true, false) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a path to a local module file.", (true, true) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a path to a local module file or a module reference using one of the following schemes: {FormatSchemes()}"} |
+| BCP190 | The artifact with reference "{artifactRef}" has not been restored. |
+| BCP191 | Unable to restore the artifact with reference "{artifactRef}". |
+| BCP192 | Unable to restore the artifact with reference "{artifactRef}": {message} |
+| BCP193 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.Oci}:&lt;artifact-uri>:&lt;tag>", or "{ArtifactReferenceSchemes.Oci}/&lt;module-alias>:&lt;module-name-or-path>:&lt;tag>". |
+| BCP194 | {BuildInvalidTemplateSpecReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.TemplateSpecs}:&lt;subscription-ID>/&lt;resource-group-name>/&lt;template-spec-name>:&lt;version>", or "{ArtifactReferenceSchemes.TemplateSpecs}/&lt;module-alias>:&lt;template-spec-name>:&lt;version>". |
+| BCP195 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The artifact path segment "{badSegment}" is not valid. Each artifact name path segment must be a lowercase alphanumeric string optionally separated by a ".", "_", or "-". |
+| BCP196 | The module tag or digest is missing. |
+| BCP197 | The tag "{badTag}" exceeds the maximum length of {maxLength} characters. |
+| BCP198 | The tag "{badTag}" is not valid. Valid characters are alphanumeric, ".", "_", or "-" but the tag cannot begin with ".", "_", or "-". |
+| BCP199 | Module path "{badRepository}" exceeds the maximum length of {maxLength} characters. |
+| BCP200 | The registry "{badRegistry}" exceeds the maximum length of {maxLength} characters. |
+| BCP201 | Expected a provider specification string with a valid format at this location. Valid formats are "br:&lt;providerRegistryHost>/&lt;providerRepositoryPath>@&lt;providerVersion>" or "br/&lt;providerAlias>:&lt;providerName>@&lt;providerVersion>". |
+| BCP202 | Expected a provider alias name at this location. |
+| BCP203 | Using provider statements requires enabling EXPERIMENTAL feature "Extensibility". |
+| BCP204 | Provider namespace "{identifier}" is not recognized. |
+| BCP205 | Provider namespace "{identifier}" does not support configuration. |
+| BCP206 | Provider namespace "{identifier}" requires configuration, but none was provided. |
+| BCP207 | Namespace "{identifier}" is declared multiple times. Remove the duplicates. |
+| BCP208 | The specified namespace "{badNamespace}" is not recognized. Specify a resource reference using one of the following namespaces: {ToQuotedString(allowedNamespaces)}. |
+| BCP209 | Failed to find resource type "{resourceType}" in namespace "{@namespace}". |
+| BCP210 | Resource type belonging to namespace "{childNamespace}" cannot have a parent resource type belonging to different namespace "{parentNamespace}". |
+| BCP211 | The module alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". |
+| BCP212 | The Template Spec module alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. |
+| BCP213 | The OCI artifact module alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. |
+| BCP214 | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "subscription" property cannot be null or undefined. |
+| BCP215 | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "resourceGroup" property cannot be null or undefined. |
+| BCP216 | The OCI artifact module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property cannot be null or undefined. |
+| BCP217 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The subscription ID "{subscriptionId}" is not a GUID. |
+| BCP218 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" exceeds the maximum length of {maximumLength} characters. |
+| BCP219 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" is invalid. Valid characters are alphanumeric, unicode characters, ".", "_", "-", "(", or ")", but the resource group name cannot end with ".". |
+| BCP220 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" exceeds the maximum length of {maximumLength} characters. |
+| BCP221 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name cannot end with ".". |
+| BCP222 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" exceeds the maximum length of {maximumLength} characters. |
+| BCP223 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name cannot end with ".". |
+| BCP224 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The digest "{badDigest}" is not valid. The valid format is a string "sha256:" followed by exactly 64 lowercase hexadecimal digits. |
+| BCP225 | The discriminator property "{propertyName}" value cannot be determined at compilation time. Type checking for this object is disabled. |
+| BCP226 | Expected at least one diagnostic code at this location. Valid format is "#disable-next-line diagnosticCode1 diagnosticCode2 ...". |
+| BCP227 | The type "{resourceType}" cannot be used as a parameter or output type. Extensibility types are currently not supported as parameters or outputs. |
+| BCP229 | The parameter "{parameterName}" cannot be used as a resource scope or parent. Resources passed as parameters cannot be used as a scope or parent of a resource. |
+| BCP300 | Expected a type literal at this location. Please specify a concrete value or a reference to a literal type. |
+| BCP301 | The type name "{reservedName}" is reserved and may not be attached to a user-defined type. |
+| BCP302 | The name "{name}" is not a valid type. Please specify one of the following types: {ToQuotedString(validTypes)}. |
+| BCP303 | String interpolation is unsupported for specifying the provider. |
+| BCP304 | Invalid provider specifier string. Specify a valid provider of format "&lt;providerName>@&lt;providerVersion>". |
+| BCP305 | Expected the "with" keyword, "as" keyword, or a new line character at this location. |
+| BCP306 | The name "{name}" refers to a namespace, not to a type. |
+| BCP307 | The expression cannot be evaluated, because the identifier properties of the referenced existing resource including {ToQuotedString(runtimePropertyNames.OrderBy(x => x))} cannot be calculated at the start of the deployment. In this situation, {accessiblePropertyNamesClause}{accessibleFunctionNamesClause}. |
+| BCP308 | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a user-defined type. |
+| BCP309 | Values of type "{flattenInputType.Name}" cannot be flattened because "{incompatibleType.Name}" is not an array type. |
+| BCP311 | The provided index value of "{indexSought}" is not valid for type "{typeName}". Indexes for this type must be between 0 and {tupleLength - 1}. |
+| BCP315 | An object type may have at most one additional properties declaration. |
+| BCP316 | The "{LanguageConstants.ParameterSealedPropertyName}" decorator may not be used on object types with an explicit additional properties type declaration. |
+| BCP317 | Expected an identifier, a string, or an asterisk at this location. |
+| BCP318 | The value of type "{possiblyNullType}" may be null at the start of the deployment, which would cause this access expression (and the overall deployment with it) to fail. If you do not know whether the value will be null and the template would handle a null value for the overall expression, use a `.?` (safe dereference) operator to short-circuit the access expression if the base expression's value is null: {accessExpression.AsSafeAccess().ToString()}. If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. |
+| BCP319 | The type at "{errorSource}" could not be resolved by the ARM JSON template engine. Original error message: "{message}" |
+| BCP320 | The properties of module output resources cannot be accessed directly. To use the properties of this resource, pass it as a resource-typed parameter to another module and access the parameter's properties therein. |
+| BCP321 | Expected a value of type "{expectedType}" but the provided value is of type "{actualType}". If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. |
+| BCP322 | The `.?` (safe dereference) operator may not be used on instance function invocations. |
+| BCP323 | The `[?]` (safe dereference) operator may not be used on resource or module collections. |
+| BCP325 | Expected a type identifier at this location. |
+| BCP326 | Nullable-typed parameters may not be assigned default values. They have an implicit default of 'null' that cannot be overridden. |
+| <a id='BCP327' />[BCP327](./diagnostics/bcp327.md) | The provided value (which will always be greater than or equal to &lt;value>) is too large to assign to a target for which the maximum allowable value is &lt;max-value>. |
+| <a id='BCP328' />[BCP328](./diagnostics/bcp328.md) | The provided value (which will always be less than or equal to &lt;value>) is too small to assign to a target for which the minimum allowable value is &lt;min-value>. |
+| BCP329 | The provided value can be as small as {sourceMin} and may be too small to assign to a target with a configured minimum of {targetMin}. |
+| BCP330 | The provided value can be as large as {sourceMax} and may be too large to assign to a target with a configured maximum of {targetMax}. |
+| BCP331 | A type's "{minDecoratorName}" must be less than or equal to its "{maxDecoratorName}", but a minimum of {minValue} and a maximum of {maxValue} were specified. |
+| <a id='BCP332' />[BCP332](./diagnostics/bcp332.md) | The provided value (whose length will always be greater than or equal to &lt;string-length>) is too long to assign to a target for which the maximum allowable length is &lt;max-length>. |
+| <a id='BCP333' />[BCP333](./diagnostics/bcp333.md) | The provided value (whose length will always be less than or equal to &lt;string-length>) is too short to assign to a target for which the minimum allowable length is &lt;min-length>. |
+| BCP334 | The provided value can have a length as small as {sourceMinLength} and may be too short to assign to a target with a configured minimum length of {targetMinLength}. |
+| BCP335 | The provided value can have a length as large as {sourceMaxLength} and may be too long to assign to a target with a configured maximum length of {targetMaxLength}. |
+| BCP337 | This declaration type is not valid for a Bicep Parameters file. Specify a "{LanguageConstants.UsingKeyword}", "{LanguageConstants.ParameterKeyword}" or "{LanguageConstants.VariableKeyword}" declaration. |
+| BCP338 | Failed to evaluate parameter "{parameterName}": {message} |
+| BCP339 | The provided array index value of "{indexSought}" is not valid. Array index should be greater than or equal to 0. |
+| BCP340 | Unable to parse literal YAML value. Please ensure that it is well-formed. |
+| BCP341 | This expression is being used inside a function declaration, which requires a value that can be calculated at the start of the deployment. {variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP342 | User-defined types are not supported in user-defined function parameters or outputs. |
+| BCP344 | Expected an assert identifier at this location. |
+| BCP345 | A test declaration can only reference a Bicep File |
+| BCP346 | Expected a test identifier at this location. |
+| BCP347 | Expected a test path string at this location. |
+| BCP348 | Using a test declaration statement requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.TestFramework)}". |
+| BCP349 | Using an assert declaration requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.Assertions)}". |
+| BCP350 | Value of type "{valueType}" cannot be assigned to an assert. Asserts can take values of type 'bool' only. |
+| BCP351 | Function "{functionName}" is not valid at this location. It can only be used when directly assigning to a parameter. |
+| BCP352 | Failed to evaluate variable "{name}": {message} |
+| BCP353 | The {itemTypePluralName} {ToQuotedString(itemNames)} differ only in casing. The ARM deployments engine is not case sensitive and will not be able to distinguish between them. |
+| BCP354 | Expected left brace ('{') or asterisk ('*') character at this location. |
+| BCP355 | Expected the name of an exported symbol at this location. |
+| BCP356 | Expected a valid namespace identifier at this location. |
+| BCP358 | This declaration is missing a template file path reference. |
+| BCP360 | The '{symbolName}' symbol was not found in (or was not exported by) the imported template. |
+| BCP361 | The "@export()" decorator must target a top-level statement. |
+| BCP362 | This symbol is imported multiple times under the names {string.Join(", ", importedAs.Select(identifier => $"'{identifier}'"))}. |
+| BCP363 | The "{LanguageConstants.TypeDiscriminatorDecoratorName}" decorator can only be applied to object-only union types with unique member types. |
+| BCP364 | The property "{discriminatorPropertyName}" must be a required string literal on all union member types. |
+| BCP365 | The value "{discriminatorPropertyValue}" for discriminator property "{discriminatorPropertyName}" is duplicated across multiple union member types. The value must be unique across all union member types. |
+| BCP366 | The discriminator property name must be "{acceptablePropertyName}" on all union member types. |
+| BCP367 | The "{featureName}" feature is temporarily disabled. |
+| BCP368 | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses a reference to a secret value in Azure Key Vault. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. |
+| BCP369 | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses the default value defined in the template. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. |
+| BCP372 | The "@export()" decorator may not be applied to variables that refer to parameters, modules, or resources, either directly or indirectly. The target of this decorator contains direct or transitive references to the following unexportable symbols: {ToQuotedString(nonExportableSymbols)}. |
+| BCP373 | Unable to import the symbol named "{name}": {message} |
+| BCP374 | The imported model cannot be loaded with a wildcard because it contains the following duplicated exports: {ToQuotedString(ambiguousExportNames)}. |
+| BCP375 | An import list item that identifies its target with a quoted string must include an 'as &lt;alias>' clause. |
+| BCP376 | The "{name}" symbol cannot be imported because imports of kind {exportMetadataKind} are not supported in files of kind {sourceFileKind}. |
+| BCP377 | The provider alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". |
+| BCP378 | The OCI artifact provider alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property cannot be null or undefined. |
+| BCP379 | The OCI artifact provider alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. |
+| BCP380 | Artifacts of type: "{artifactType}" are not supported. |
+| BCP381 | Declaring provider namespaces with the "import" keyword has been deprecated. Please use the "provider" keyword instead. |
+| BCP383 | The "{typeName}" type is not parameterizable. |
+| BCP384 | The "{typeName}" type requires {requiredArgumentCount} argument(s). |
+| BCP385 | Using resource-derived types requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ResourceDerivedTypes)}". |
+| BCP386 | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a resource-derived type. |
+| BCP387 | Indexing into a type requires an integer greater than or equal to 0. |
+| BCP388 | Cannot access elements of type "{wrongType}" by index. A tuple type is required. |
+| BCP389 | The type "{wrongType}" does not declare an additional properties type. |
+| BCP390 | The array item type access operator ('[*]') can only be used with typed arrays. |
+| BCP391 | Type member access is only supported on a reference to a named type. |
+| BCP392 | The supplied resource type identifier "{resourceTypeIdentifier}" was not recognized as a valid resource type name. |
+| BCP393 | The type pointer segment "{unrecognizedSegment}" was not recognized. Supported pointer segments are: "properties", "items", "prefixItems", and "additionalProperties". |
+| BCP394 | Resource-derived type expressions must dereference a property within the resource body. Using the entire resource body type is not permitted. |
+| BCP395 | Declaring provider namespaces using the '&lt;providerName>@&lt;version>' expression has been deprecated. Please use an identifier instead. |
+| BCP396 | The referenced provider types artifact has been published with malformed content. |
+| BCP397 | Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is referenced in the "{RootConfiguration.ImplicitProvidersConfigurationKey}" section, but is missing corresponding configuration in the "{RootConfiguration.ProvidersConfigurationKey}" section. |
+| BCP398 | Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is configured as built-in in the "{RootConfiguration.ProvidersConfigurationKey}" section, but no built-in provider exists. |
+| BCP399 | Fetching az types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.DynamicTypeLoading)}". |
+| BCP400 | Fetching types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ProviderRegistry)}". |
+
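+Many of these diagnostics can be suppressed for a single line with the `#disable-next-line` directive (see BCP226 in the table for the expected format). A minimal, hypothetical sketch: the resource type below is invented, so Bicep would normally report BCP081 (no type metadata available), and the directive suppresses that one warning:
+
+```bicep
+// Suppress BCP081 (resource type has no type metadata) on the next line only.
+#disable-next-line BCP081
+resource example 'Hypothetical.Provider/examples@2024-01-01' = {
+  name: 'example'
+}
+```
+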
+## Next steps
+
+To learn about Bicep, see [Bicep overview](./overview.md).
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md
To use an ARM template to create a new Azure subscription in a management group,
* [Programmatically create Azure subscriptions for a Microsoft Customer Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md)
* [Programmatically create Azure subscriptions for a Microsoft Partner Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md)
-To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-subscriptions-in-arm-template)
+To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-a-subscription-in-an-arm-template)
## Azure Policy
azure-resource-manager Bcp033 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp033.md
+
+ Title: BCP033
+description: Error/warning - Expected a value of type <data-type> but the provided value is of type <data-type>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP033
+
+This error/warning occurs when you assign a value of a mismatched data type.
+
+## Error/warning description
+
+`Expected a value of type <data-type> but the provided value is of type <data-type>.`
+
+## Solution
+
+Use the expected data type.
+
+## Examples
+
+The following example raises the error because the expected data type is a string. The actual provided value is an integer:
+
+```bicep
+var myValue = 5
+
+output myString string = myValue
+```
+
+You can fix the error by providing a string value:
+
+```bicep
+var myValue = '5'
+
+output myString string = myValue
+```
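+
+If changing the variable isn't an option, converting the value also satisfies the expected type. This sketch assumes the integer is the value you actually want to emit, converted with the standard `string()` function:
+
+```bicep
+var myValue = 5
+
+// string() converts the integer to its string representation ('5').
+output myString string = string(myValue)
+```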
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp035 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp035.md
+
+ Title: BCP035
+description: Error/warning - The specified <data-type> declaration is missing the following required properties: <property-name>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP035
+
+This error/warning occurs when your resource definition is missing a required property.
+
+## Error/warning description
+
+`The specified <data-type> declaration is missing the following required properties: <property-name>.`
+
+## Solution
+
+Add the missing property to the resource definition.
+
+## Examples
+
+The following example raises the warning for **virtualNetworkGateway1** and **virtualNetworkGateway2**:
+
+```bicep
+var networkConnectionName = 'testConnection'
+var location = 'eastus'
+var vnetGwAId = 'gatewayA'
+var vnetGwBId = 'gatewayB'
+
+resource networkConnection 'Microsoft.Network/connections@2023-11-01' = {
+ name: networkConnectionName
+ location: location
+ properties: {
+ virtualNetworkGateway1: {
+ id: vnetGwAId
+ }
+ virtualNetworkGateway2: {
+ id: vnetGwBId
+ }
+
+ connectionType: 'Vnet2Vnet'
+ }
+}
+```
+
+The warning is:
+
+```warning
+The specified "object" declaration is missing the following required properties: "properties". If this is an inaccuracy in the documentation, please report it to the Bicep Team.
+```
+
+You can verify the missing properties from the [template reference](/azure/templates). If you see the warning from Visual Studio Code, hover the cursor over the resource symbolic name and select **View document** to open the template reference.
+
+You can fix the issue by adding the missing properties:
+
+```bicep
+var networkConnectionName = 'testConnection'
+var location = 'eastus'
+var vnetGwAId = 'gatewayA'
+var vnetGwBId = 'gatewayB'
+
+resource networkConnection 'Microsoft.Network/connections@2023-11-01' = {
+ name: networkConnectionName
+ location: location
+ properties: {
+ virtualNetworkGateway1: {
+ id: vnetGwAId
+ properties:{}
+ }
+ virtualNetworkGateway2: {
+ id: vnetGwBId
+ properties:{}
+ }
+
+ connectionType: 'Vnet2Vnet'
+ }
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp036 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp036.md
+
+ Title: BCP036
+description: Error/warning - The property <property-name> expected a value of type <data-type> but the provided value is of type <data-type>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP036
+
+This error/warning occurs when you assign a value to a property whose expected data type isn't compatible with the type of the assigned value.
+
+## Error/warning description
+
+`The property <property-name> expected a value of type <data-type> but the provided value is of type <data-type>.`
+
+## Solution
+
+Assign a value with the correct data type.
+
+## Examples
+
+The following example raises the error because `sku` is defined as a string, not an integer:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 2
+}
+```
+
+You can fix the issue by assigning a string value to `sku`:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 'Standard_LRS'
+}
+```
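+
+A tighter variant of this fix, sketched here with illustrative SKU names, declares `sku` as a string-literal union instead of a plain `string`. Any value outside the listed literals is then caught at compile time:
+
+```bicep
+type storageAccountConfigType = {
+  name: string
+  // Only these literal values are accepted for sku.
+  sku: 'Premium_LRS' | 'Standard_GRS' | 'Standard_LRS'
+}
+
+param foo storageAccountConfigType = {
+  name: 'myStorage'
+  sku: 'Standard_LRS'
+}
+```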
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp037 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp037.md
+
+ Title: BCP037
+description: Warning - The property <property-name> is not allowed on objects of type <type-definition>.
++ Last updated : 07/15/2024++
+# Bicep warning code - BCP037
+
+This warning occurs when you specify a property that isn't defined in a resource type.
+
+## Warning description
+
+`The property <property-name> is not allowed on objects of type <type-definition>.`
+
+## Solution
+
+Remove the undefined property.
+
+## Examples
+
+The following example raises the warning because `bar` isn't defined in `storageAccountType`:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 'Standard_LRS'
+ bar: 'myBar'
+}
+```
+
+You can fix the issue by removing the property:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 'Standard_LRS'
+}
+```
+
+The following example raises the error because `obj` is a sealed type and doesn't define a `baz` property.
+
+```bicep
+@sealed()
+type obj = {
+ foo: string
+ bar: string
+}
+
+param p obj = {
+ foo: 'foo'
+ bar: 'bar'
+ baz: 'baz'
+}
+```
+
+You can fix the issue by removing the property:
+
+```bicep
+@sealed()
+type obj = {
+ foo: string
+ bar: string
+}
+
+param p obj = {
+ foo: 'foo'
+ bar: 'bar'
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp040 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp040.md
+
+ Title: BCP040
+description: Error/warning - String interpolation is not supported for keys on objects of type <type-definition>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP040
+
+This error/warning occurs when the Bicep compiler can't determine the exact value of an interpolated string key.
+
+## Error/warning description
+
+`String interpolation is not supported for keys on objects of type <type-definition>.`
+
+## Solution
+
+Remove string interpolation.
+
+## Examples
+
+The following example raises the warning because string interpolation is used for specifying the key `sku1`:
+
+```bicep
+var name = 'sku'
+
+type storageAccountConfigType = {
+ name: string
+ sku1: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ '${name}1': 'Standard_LRS'
+}
+```
+
+You can fix the issue by removing the string interpolation and using the literal key name:
+
+```bicep
+var name = 'sku'
+
+type storageAccountConfigType = {
+ name: string
+ sku1: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku1: 'Standard_LRS'
+}
+```
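+
+The restriction applies to objects with a declared type, where the compiler must match every key against the type definition. If the object doesn't need a declared type, an interpolated key is allowed; this sketch assumes a plain `var` object is acceptable:
+
+```bicep
+var name = 'sku'
+
+// Untyped objects accept interpolated keys; this evaluates to { sku1: 'Standard_LRS' }.
+var settings = {
+  '${name}1': 'Standard_LRS'
+}
+
+output settings object = settings
+```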
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp053 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp053.md
+
+ Title: BCP053
+description: Error/warning - The type <resource-type> does not contain property <property-name>. Available properties include <property-names>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP053
+
+This error/warning occurs when you reference a property that isn't defined in the resource type or [user-defined data type](../user-defined-data-types.md).
+
+## Error/warning description
+
+`The type <resource-type> does not contain property <property-name>. Available properties include <property-names>.`
+
+## Solution
+
+Reference the correct property name.
+
+## Examples
+
+The following example raises the error because `Microsoft.Storage/storageAccounts` doesn't contain a property called `bar`.
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'myStorage'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+output foo string = storage.bar
+```
+
+You can fix the error by referencing a valid property, such as `name`:
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'myStorage'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+output foo string = storage.name
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp072 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp072.md
+
+ Title: BCP072
+description: Error - This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values.
++ Last updated : 07/15/2024++
+# Bicep error code - BCP072
+
+This error occurs when you reference a variable in parameter default values.
+
+## Error description
+
+`This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values.`
+
+## Solution
+
+Reference another parameter instead.
+
+## Examples
+
+The following example raises the error because the parameter default value references a variable:
+
+```bicep
+param foo string = bar
+
+var bar = 'HelloWorld!'
+```
+
+You can fix the error by referencing another parameter:
+
+```bicep
+param foo string = bar
+param bar string = 'HelloWorld!'
+
+output outValue string = foo
+```
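+
+Besides other parameters, default values can also call functions. For example, a common idiom is to default a location parameter to the resource group's location instead of hard-coding it:
+
+```bicep
+// Function calls are permitted in parameter default values.
+param location string = resourceGroup().location
+```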
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp073 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp073.md
+
+ Title: BCP073
+description: Warning - The property <property-name> is read-only. Expressions cannot be assigned to read-only properties.
++ Last updated : 07/15/2024++
+# Bicep warning code - BCP073
+
+This warning occurs when you assign a value to a read-only property.
+
+## Warning description
+
+`The property <property-name> is read-only. Expressions cannot be assigned to read-only properties.`
+
+## Solution
+
+Remove the property assignment from the file.
+
+## Examples
+
+The following example raises the warning because `sku` can only be set at the `storageAccounts` level. It's read-only for child services of a storage account, such as `blobServices` and `fileServices`.
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'mystore'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-04-01' = {
+ parent: storage
+ name: 'default'
+ sku: {}
+}
+```
+
+You can fix the issue by removing the `sku` property assignment:
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'mystore'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-04-01' = {
+ parent: storage
+ name: 'default'
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp327 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp327.md
+
+ Title: BCP327
+description: Error/warning - The provided value (which will always be greater than or equal to <value>) is too large to assign to a target for which the maximum allowable value is <max-value>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP327
+
+This error/warning occurs when you assign a value that is greater than the maximum allowable value.
+
+## Error/warning description
+
+`The provided value (which will always be greater than or equal to <value>) is too large to assign to a target for which the maximum allowable value is <max-value>.`
+
+## Solution
+
+Assign a value that falls within the permitted range.
+
+## Examples
+
+The following example raises the error because `13` is greater than the maximum allowable value:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 13
+
+```
+
+You can fix the error by assigning a value within the permitted range:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 12
+
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp328 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp328.md
+
+ Title: BCP328
+description: Error/warning - The provided value (which will always be less than or equal to <value>) is too small to assign to a target for which the minimum allowable value is <min-value>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP328
+
+This error/warning occurs when you assign a value that is less than the minimum allowable value.
+
+## Error/warning description
+
+`The provided value (which will always be less than or equal to <value>) is too small to assign to a target for which the minimum allowable value is <min-value>.`
+
+## Solution
+
+Assign a value that falls within the permitted range.
+
+## Examples
+
+The following example raises the error because `0` is less than the minimum allowable value:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 0
+```
+
+You can fix the error by assigning a value within the permitted range:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 1
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp332 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp332.md
+
+ Title: BCP332
+description: Error/warning - The provided value (whose length will always be greater than or equal to <length>) is too long to assign to a target for which the maximum allowable length is <max-length>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP332
+
+This error/warning occurs when you assign a string or array that exceeds the maximum allowable length.
+
+## Error/warning description
+
+`The provided value (whose length will always be greater than or equal to <length>) is too long to assign to a target for which the maximum allowable length is <max-length>.`
+
+## Solution
+
+Assign a string or array whose length is within the allowable range.
+
+## Examples
+
+The following example raises the error because the value `longerThan10` exceeds the maximum allowable length:
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'longerThan10'
+```
+
+You can fix the error by assigning a string whose length is within the allowable range:
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'myStorage'
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp333 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp333.md
+
+ Title: BCP333
+description: Error/warning - The provided value (whose length will always be less than or equal to <length>) is too short to assign to a target for which the minimum allowable length is <min-length>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP333
+
+This error/warning occurs when you assign a string or array that is shorter than the minimum allowable length.
+
+## Error/warning description
+
+`The provided value (whose length will always be less than or equal to <length>) is too short to assign to a target for which the minimum allowable length is <min-length>.`
+
+## Solution
+
+Assign a string or array whose length is within the allowable range.
+
+## Examples
+
+The following example raises the error because the value `st` is shorter than the minimum allowable length:
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'st'
+```
+
+You can fix the error by assigning a string whose length is within the allowable range:
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'myStorage'
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
There are some important steps to do before moving a resource. By verifying these conditions, you can avoid errors.
* [Networking move guidance](./move-limitations/networking-move-limitations.md)
* [Recovery Services move guidance](../../backup/backup-azure-move-recovery-services-vault.md?toc=/azure/azure-resource-manager/toc.json)
* [Virtual Machines move guidance](./move-limitations/virtual-machines-move-limitations.md)
- * To move an Azure subscription to a new management group, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions).
+ * To move an Azure subscription to a new management group, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions).
1. The destination subscription must be registered for the resource provider of the resource being moved. If not, you receive an error stating that the **subscription is not registered for a resource type**. You might see this error when moving a resource to a new subscription, but that subscription has never been used with that resource type.
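
If you want to verify or fix provider registration before you attempt the move, here's a minimal sketch using Azure PowerShell; the subscription ID and the `Microsoft.Compute` namespace are placeholder examples, not values from this article.

```powershell
# A minimal sketch, assuming the Az PowerShell module; the subscription ID and
# provider namespace are placeholders for your own values.
Set-AzContext -Subscription '00000000-0000-0000-0000-000000000000'

# Check the provider's registration state in the destination subscription.
Get-AzResourceProvider -ProviderNamespace Microsoft.Compute |
    Select-Object ProviderNamespace, RegistrationState

# Register the provider if it isn't registered yet.
Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
```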
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-management-group.md
To use an ARM template to create a new Azure subscription in a management group, see:
* [Programmatically create Azure subscriptions for a Microsoft Customer Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md)
* [Programmatically create Azure subscriptions for a Microsoft Partner Agreement](../../cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md)
-To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-subscriptions-in-arm-template)
+To deploy a template that moves an existing Azure subscription to a new management group, see [Move subscriptions in ARM template](../../governance/management-groups/manage.md#move-a-subscription-in-an-arm-template).
## Azure Policy
backup Backup Sql Server Database From Azure Vm Blade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-from-azure-vm-blade.md
+
+ Title: Back up SQL Server from the Azure VM blade using Azure Backup
+description: In this article, learn how to back up SQL Server databases from the Azure VM blade via the Azure portal.
+ Last updated : 07/23/2024++++
+# Back up SQL Server from the Azure SQL Server VM blade
+
+This article describes how to use Azure Backup to back up SQL Server (running in an Azure VM) from the SQL VM resource via the Azure portal.
+
+SQL Server databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. You can back up SQL Server databases running on Azure virtual machines (VMs) by using [Azure Backup](backup-overview.md).
+
+>[!Note]
+>Learn more about the [SQL backup supported configurations and scenarios](sql-support-matrix.md).
+
+## Prerequisites
+
+Before you back up a SQL Server database, see the [backup criteria](backup-sql-server-database-azure-vms.md#prerequisites).
+
+## Configure backup for a SQL Server database
+
+You can configure Azure Backup for your SQL Server running in an Azure VM directly from the SQL VM resource blade.
+
+To configure backup from the SQL VM blade, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com/), go to the *SQL VM resource*.
+
+ >[!Note]
+ >The SQL Server resource is different from the virtual machine resource.
+
+1. Go to **Settings** > **Backups**.
+
+ If the backup isn't configured for the VM, the following backup options appear:
+
+ - **Azure Backup**
+ - **Automated Backup**
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/select-backups.png" alt-text="Screenshot shows how to select the Backups option on a SQL VM." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/select-backups.png":::
+
+1. On the **Azure Backup** blade, select **Enable** to start configuring backup for the SQL Server by using Azure Backup.
+
+1. To start the backup operation, select an existing Recovery Services vault or [create a new vault](backup-sql-server-database-azure-vms.md#create-a-recovery-services-vault).
+
+1. Select **Discover** to start discovering databases in the VM.
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/start-database-discovery.png" alt-text="Screenshot shows how to start discovering the SQL database." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/start-database-discovery.png":::
+
+ This operation takes some time when performed for the first time.
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/database-discovery-in-progress.png" alt-text="Screenshot shows the database discovery operation in progress." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/database-discovery-in-progress.png":::
+
+ Azure Backup discovers all SQL Server databases on the VM. During discovery, the following operations run in the background:
+
+ 1. Azure Backup registers the VM with the vault for workload backup. All databases on the registered VM can only be backed up to this vault.
+ 1. Azure Backup installs the AzureBackupWindowsWorkload extension on the VM. No agent is installed on the SQL database.
+ 1. Azure Backup creates the service account NT Service\AzureWLBackupPluginSvc on the VM.
+ 1. All backup and restore operations use the service account.
+ 1. NT Service\AzureWLBackupPluginSvc needs SQL sysadmin permissions. All SQL Server VMs created in Azure Marketplace come with the SqlIaaSExtension installed.
+
+ The AzureBackupWindowsWorkload extension uses the SQLIaaSExtension to automatically get the necessary permissions.
+
+1. Once the operation is completed, select **Configure backup**.
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/start-database-backup-configuration.png" alt-text="Screenshot shows how to start the database backup configuration." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/start-database-backup-configuration.png":::
+
+1. Define a backup policy by using one of the following options:
+
+ 1. Select the default policy, *HourlyLogBackup*.
+ 1. Select an existing backup policy previously created for SQL.
+ 1. [Create a new policy](tutorial-sql-backup.md#create-a-backup-policy) based on your RPO and retention range.
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/select-backup-policy.png" alt-text="Screenshot shows how to select a backup policy for the database." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/select-backup-policy.png":::
+
+1. Select **Add** to view all the registered availability groups and standalone SQL Server instances.
+
+1. On **Select items to backup**, expand the list of all the *unprotected databases* in that instance or the *Always On availability group*.
+
+1. Select the *databases* to protect and select **OK**.
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/confirm-database-selection.png" alt-text="Screenshot shows how to confirm the selection of database for backup." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/confirm-database-selection.png":::
+
+1. To optimize backup loads, Azure Backup permits a maximum of 50 databases in one backup job.
+
+ 1. To protect more than 50 databases, configure multiple backups.
+ 1. To enable the entire instance or the Always On availability group, in the **AUTOPROTECT** drop-down list, select **ON**, and then select **OK**.
+
+1. Select **Enable Backup** to submit the Configure Protection operation and track the configuration progress in the Notifications area of the portal.
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/enable-database-backup.png" alt-text="Screenshot shows how to enable the database backup operation." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/enable-database-backup.png":::
+
+1. To get an overview of your configured backups and a summary of backup jobs, go to **Settings** > **Backups** in the SQL VM resource.
+
+ :::image type="content" source="./media/backup-sql-server-database-from-azure-vm-blade/backup-jobs-summary.png" alt-text="Screenshot shows how to view the backup jobs summary." lightbox="./media/backup-sql-server-database-from-azure-vm-blade/backup-jobs-summary.png":::
+
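+If you prefer to script this flow instead of using the portal, the following is a minimal sketch that uses the Az.RecoveryServices PowerShell module. Every name, and the VM resource ID, is a placeholder rather than a value from this article.
+
+```powershell
+# A minimal sketch, assuming the Az.RecoveryServices module; all names below
+# are placeholders.
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'myResourceGroup' -Name 'myRecoveryVault'
+Set-AzRecoveryServicesVaultContext -Vault $vault
+
+# Register the SQL VM with the vault (the scripted equivalent of Discover).
+$vmId = '/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM'
+Register-AzRecoveryServicesBackupContainer -ResourceId $vmId `
+    -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $vault.ID
+
+# Pick a policy, find the database, and enable protection.
+$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name 'HourlyLogBackup' -VaultId $vault.ID
+$db = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLDataBase `
+    -Name 'myDatabase' -ServerName 'myVM' -VaultId $vault.ID
+Enable-AzRecoveryServicesBackupProtection -ProtectableItem $db -Policy $policy -VaultId $vault.ID
+```
+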
+## Next steps
+
+- [Restore SQL Server databases on Azure VM](restore-sql-database-azure-vm.md)
+- [Manage and monitor backed up SQL Server databases](manage-monitor-sql-database-backup.md)
+- [Troubleshoot backups on a SQL Server database](backup-sql-server-azure-troubleshoot.md)
+- [FAQ - Backing up SQL Server databases on Azure VMs - Azure Backup | Microsoft Learn](/azure/backup/faq-backup-sql-server)
cloud-services Applications Dont Support Tls 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/applications-dont-support-tls-1-2.md
tag: top-support-issue Previously updated : 02/21/2023 Last updated : 07/23/2024 # Troubleshooting applications that don't support TLS 1.2 [!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This article describes how to enable the older TLS protocols (TLS 1.0 and 1.1) as well as applying legacy cipher suites to support the additional protocols on the Windows Server 2019 cloud service web and worker roles.
+This article describes how to enable the older TLS protocols (TLS 1.0 and 1.1). It also covers the application of legacy cipher suites to support the additional protocols on the Windows Server 2019 cloud service web and worker roles.
-We understand that while we are taking steps to deprecate TLS 1.0 and TLS 1.1, our customers may need to support the older protocols and cipher suites until they can plan for their deprecation. While we don't recommend re-enabling these legacy values, we are providing guidance to help customers. We encourage customers to evaluate the risk of regression before implementing the changes outlined in this article.
+We understand that while we're taking steps to deprecate TLS 1.0 and TLS 1.1, our customers may need to support the older protocols and cipher suites in the meantime. While we don't recommend re-enabling these legacy values, we're providing guidance to help customers. We encourage customers to evaluate the risk of regression before implementing the changes outlined in this article.
> [!NOTE]
> Guest OS Family 6 release enforces TLS 1.2 by explicitly disabling TLS 1.0 and 1.1 and defining a specific set of cipher suites. For more information on Guest OS families, see [Guest OS release news](./cloud-services-guestos-update-matrix.md#family-6-releases).

## Dropping support for TLS 1.0, TLS 1.1, and older cipher suites
-In support of our commitment to use best-in-class encryption, Microsoft announced plans to start migration away from TLS 1.0 and 1.1 in June of 2017. Since that initial announcement, Microsoft announced our intent to disable Transport Layer Security (TLS) 1.0 and 1.1 by default in supported versions of Microsoft Edge and Internet Explorer 11 in the first half of 2020. Similar announcements from Apple, Google, and Mozilla indicate the direction in which the industry is headed.
+In support of our commitment to use best-in-class encryption, Microsoft announced plans to start migration away from TLS 1.0 and 1.1 in June of 2017. Microsoft announced our intent to disable Transport Layer Security (TLS) 1.0 and 1.1 by default in supported versions of Microsoft Edge and Internet Explorer 11 in the first half of 2020. Similar announcements from Apple, Google, and Mozilla indicate the direction in which the industry is headed.
For more information, see [Preparing for TLS 1.2 in Microsoft Azure](https://azure.microsoft.com/updates/azuretls12/).

## TLS configuration
-The Windows Server 2019 cloud server image is configured with TLS 1.0 and TLS 1.1 disabled at the registry level. This means applications deployed to this version of Windows AND using the Windows stack for TLS negotiation will not allow TLS 1.0 and TLS 1.1 communication.
+The Windows Server 2019 cloud server image is configured with TLS 1.0 and TLS 1.1 disabled at the registry level. This means that applications deployed to this version of Windows that use the Windows stack for TLS negotiation won't allow TLS 1.0 and TLS 1.1 communication.
The server also comes with a limited set of cipher suites:
## Step 1: Create the PowerShell script to enable TLS 1.0 and TLS 1.1
-Use the following code as an example to create a script that enables the older protocols and cipher suites. For the purposes of this documentation, this script will be named: **TLSsettings.ps1**. Store this script on your local desktop for easy access in later steps.
+Use the following code as an example to create a script that enables the older protocols and cipher suites. For the purposes of this documentation, this script is named **TLSsettings.ps1**. Store this script on your local desktop for easy access in later steps.
```powershell # You can use the -SetCipherOrder (or -sco) option to also set the TLS cipher
If ($reboot) {
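
The full script is abbreviated in this view. Its core idea is to re-enable the protocols through the Schannel registry keys, as in the following minimal sketch; this isn't the complete **TLSsettings.ps1**, and the cipher suite handling is omitted.

```powershell
# A minimal sketch of the registry changes, not the full TLSsettings.ps1;
# cipher suite ordering is omitted.
foreach ($protocol in 'TLS 1.0', 'TLS 1.1') {
    foreach ($role in 'Client', 'Server') {
        $key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$protocol\$role"
        New-Item -Path $key -Force | Out-Null
        # Enable the protocol and clear the disabled-by-default flag.
        New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
        New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
    }
}
# Schannel reads these values at startup, so a reboot is required.
```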
## Step 2: Create a command file
-Create a CMD file named **RunTLSSettings.cmd** using the below. Store this script on your local desktop for easy access in later steps.
+Create a CMD file named **RunTLSSettings.cmd** using the following script. Store this script on your local desktop for easy access in later steps.
```cmd SET LOG_FILE="%TEMP%\StartupLog.txt"
Add the following snippet to your existing service definition file.
</Startup> ```
-Here is an example that shows both the worker role and web role.
+Here's an example that shows both the worker role and web role.
``` <?xml version="1.0" encoding="utf-8"?>
To ensure the scripts are uploaded with every update pushed from Visual Studio,
## Step 6: Publish & Validate
-Now that the above steps have been complete, publish the update to your existing Cloud Service.
+Now that you completed the previous steps, publish the update to your existing Cloud Service.
You can use [SSLLabs](https://www.ssllabs.com/) to validate the TLS status of your endpoints.
cloud-services Automation Manage Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/automation-manage-cloud-services.md
Title: Manage Azure Cloud Services (classic) using Azure Automation | Microsoft
description: Learn about how the Azure Automation service can be used to manage Azure cloud services at scale. Previously updated : 02/21/2023 Last updated : 07/23/2024
This guide introduces you to the Azure Automation service and how it can be used to simplify management of your Azure cloud services.

## What is Azure Automation?
-[Azure Automation](https://azure.microsoft.com/services/automation/) is an Azure service for simplifying cloud management through process automation. Using Azure Automation, long-running, manual, error-prone, and frequently repeated tasks can be automated to increase reliability, efficiency, and time to value for your organization.
+[Azure Automation](https://azure.microsoft.com/services/automation/) is an Azure service for simplifying cloud management through process automation. When you use Azure Automation, you can automate long-running, manual, error-prone, and frequently repeated tasks to increase reliability, efficiency, and time to value for your organization.
-Azure Automation provides a highly reliable and highly available workflow execution engine that scales to meet your needs as your organization grows. In Azure Automation, processes can be kicked off manually, by third-party systems, or at scheduled intervals so that tasks happen exactly when needed.
+Azure Automation provides a highly reliable and highly available workflow execution engine that scales to meet your needs as your organization grows. In Azure Automation, processes can be kicked off manually, by non-Microsoft systems, or at scheduled intervals so that tasks happen exactly when needed.
-Lower operational overhead and free up IT / DevOps staff to focus on work that adds business value by moving your cloud management tasks to be run automatically by Azure Automation.
+Lower operational overhead and free up IT / DevOps staff to focus on work that adds business value by running your cloud management tasks automatically with Azure Automation.
## How can Azure Automation help manage Azure cloud services?
-Azure cloud services can be managed in Azure Automation by using the PowerShell cmdlets that are available in the [Azure PowerShell tools](/powershell/). Azure Automation has these cloud service PowerShell cmdlets available out of the box, so that you can perform all of your cloud service management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets for other Azure services, to automate complex tasks across Azure services and third party systems.
+Azure cloud services can be managed in Azure Automation by using the PowerShell cmdlets that are available in the [Azure PowerShell tools](/powershell/). Azure Automation has these cloud service PowerShell cmdlets available out of the box, so that you can perform all of your cloud service management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets for other Azure services, to automate complex tasks across Azure services and non-Microsoft systems.
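+
+For example, here's a hypothetical fragment of a runbook body that uses the classic cmdlets; authentication setup is omitted, and the service and instance names are placeholders.
+
+```powershell
+# A hypothetical runbook fragment, assuming the classic Azure module;
+# authentication setup is omitted and all names are placeholders.
+$service = 'MyCloudService'
+
+# List the role instances of the cloud service and their status.
+Get-AzureRole -ServiceName $service -Slot Production -InstanceDetails |
+    Select-Object RoleName, InstanceName, InstanceStatus
+
+# Reboot a single role instance.
+Reset-AzureRoleInstance -ServiceName $service -Slot Production `
+    -InstanceName 'WebRole1_IN_0' -Reboot
+```
+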
## Next steps
-Now that you've learned the basics of Azure Automation and how it can be used to manage Azure cloud services, follow these links to learn more about Azure Automation.
+Now that you know the basics of Azure Automation and how it can be used to manage Azure cloud services, follow these links to learn more about Azure Automation.
* [Azure Automation Overview](../automation/automation-intro.md) * [My first runbook](../automation/learn/powershell-runbook-managed-identity.md)
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-allocation-failures.md
Title: Troubleshooting Cloud Service (classic) allocation failures | Microsoft D
description: Troubleshoot an allocation failure when you deploy Azure Cloud Services. Learn how allocation works and why allocation can fail. Previously updated : 02/21/2023 Last updated : 07/23/2024 -
## Summary
-When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits. This article explains the causes of some of the common allocation failures and suggests possible remediation. The information may also be useful when you plan the deployment of your services.
+When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits. This article explains the causes of some of the common allocation failures and suggests possible remediation. The information can also be useful when you plan the deployment of your services.
[!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)]

### Background – How allocation works
-The servers in Azure datacenters are partitioned into clusters. A new cloud service allocation request is attempted in multiple clusters. When the first instance is deployed to a cloud service(in either staging or production), that cloud service gets pinned to a cluster. Any further deployments for the cloud service will happen in the same cluster. In this article, we'll refer to this as "pinned to a cluster". Diagram 1 below illustrates the case of a normal allocation which is attempted in multiple clusters; Diagram 2 illustrates the case of an allocation that's pinned to Cluster 2 because that's where the existing Cloud Service CS_1 is hosted.
+The servers in Azure datacenters are partitioned into clusters. A new cloud service allocation request is attempted in multiple clusters. When the first instance is deployed to a cloud service (in either staging or production), that cloud service gets pinned to a cluster. Any further deployments for the cloud service happen in the same cluster. In this article, we refer to this state as "pinned to a cluster." The following diagram illustrates the case of a normal allocation, which is attempted in multiple clusters. The second diagram illustrates the case of an allocation pinned to Cluster 2 because that's where the existing Cloud Service CS_1 is hosted.
![Allocation Diagram](./media/cloud-services-allocation-failure/Allocation1.png)

### Why allocation failure happens
-When an allocation request is pinned to a cluster, there's a higher chance of failing to find free resources since the available resource pool is limited to a cluster. Furthermore, if your allocation request is pinned to a cluster but the type of resource you requested is not supported by that cluster, your request will fail even if the cluster has free resource. Diagram 3 below illustrates the case where a pinned allocation fails because the only candidate cluster does not have free resources. Diagram 4 illustrates the case where a pinned allocation fails because the only candidate cluster does not support the requested VM size, even though the cluster has free resources.
+When an allocation request is pinned to a cluster, there's a higher chance of failing to find free resources since the available resource pool is limited to a cluster. Furthermore, if your allocation request is pinned to a cluster but the cluster doesn't support the resource type you requested, your request fails even if the cluster has free resources. The third diagram illustrates the case where a pinned allocation fails because the only candidate cluster doesn't have free resources. The fourth diagram illustrates the case where a pinned allocation fails because the only candidate cluster doesn't support the requested virtual machine (VM) size, even though the cluster has free resources.
![Pinned Allocation Failure](./media/cloud-services-allocation-failure/Allocation2.png)
In the Azure portal, go to your cloud service. In the sidebar, select *Operation logs (classic)* to view the logs.
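
You can also pull the same events from PowerShell. The following is a small sketch that assumes the Az.Monitor module; the resource group name is a placeholder for the one that contains your cloud service.

```powershell
# A small sketch, assuming the Az.Monitor module; the resource group name is a
# placeholder.
Get-AzLog -ResourceGroupName 'my-cloud-service-rg' -StartTime (Get-Date).AddDays(-7) |
    Where-Object { $_.Status.Value -eq 'Failed' } |
    Select-Object EventTimestamp, OperationName, Status
```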
-See further solutions for the exceptions below:
+See the following solutions for these exceptions:
|Exception Type |Error Message |Solution |
|---|---|---|
-|FabricInternalServerError |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'.|[Troubleshoot FabricInternalServerError](cloud-services-troubleshoot-fabric-internal-server-error.md)|
-|ServiceAllocationFailure |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'.|[Troubleshoot ServiceAllocationFailure](cloud-services-troubleshoot-fabric-internal-server-error.md)|
-|LocationNotFoundForRoleSize |The operation '`{Operation ID}`' failed: 'The requested VM tier is currently not available in Region (`{Region ID}`) for this subscription. Please try another tier or deploy to a different location.'.|[Troubleshoot LocationNotFoundForRoleSize](cloud-services-troubleshoot-location-not-found-for-role-size.md)|
-|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there is an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Please retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the aforementioned constraints or try deploying to a different region.|[Troubleshoot ConstrainedAllocationFailed](cloud-services-troubleshoot-constrained-allocation-failed.md)|
-|OverconstrainedAllocationRequest |The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.|[Troubleshoot OverconstrainedAllocationRequest](cloud-services-troubleshoot-overconstrained-allocation-request.md)|
+|FabricInternalServerError |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'|[Troubleshoot FabricInternalServerError](cloud-services-troubleshoot-fabric-internal-server-error.md)|
+|ServiceAllocationFailure |Operation failed with error code 'InternalError' and errorMessage 'The server encountered an internal error. Please retry the request.'|[Troubleshoot ServiceAllocationFailure](cloud-services-troubleshoot-fabric-internal-server-error.md)|
+|LocationNotFoundForRoleSize |The operation '`{Operation ID}`' failed: 'The requested VM tier is currently not available in Region (`{Region ID}`) for this subscription. Please try another tier or deploy to a different location.'|[Troubleshoot LocationNotFoundForRoleSize](cloud-services-troubleshoot-location-not-found-for-role-size.md)|
+|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there's an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Please retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the constraints or try deploying to a different region.|[Troubleshoot ConstrainedAllocationFailed](cloud-services-troubleshoot-constrained-allocation-failed.md)|
+|OverconstrainedAllocationRequest |The VM size (or combination of VM sizes) required by this deployment can't be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.|[Troubleshoot OverconstrainedAllocationRequest](cloud-services-troubleshoot-overconstrained-allocation-request.md)|
Example error message:
Here are the common allocation scenarios that cause an allocation request to be pinned to a single cluster.
-* Deploying to Staging Slot - If a cloud service has a deployment in either slot, then the entire cloud service is pinned to a specific cluster. This means that if a deployment already exists in the production slot, then a new staging deployment can only be allocated in the same cluster as the production slot. If the cluster is nearing capacity, the request may fail.
-* Scaling - Adding new instances to an existing cloud service must allocate in the same cluster. Small scaling requests can usually be allocated, but not always. If the cluster is nearing capacity, the request may fail.
-* Affinity Group - A new deployment to an empty cloud service can be allocated by the fabric in any cluster in that region, unless the cloud service is pinned to an affinity group. Deployments to the same affinity group will be attempted on the same cluster. If the cluster is nearing capacity, the request may fail.
-* Affinity Group vNet - Older Virtual Networks were tied to affinity groups instead of regions, and cloud services in these Virtual Networks would be pinned to the affinity group cluster. Deployments to this type of virtual network will be attempted on the pinned cluster. If the cluster is nearing capacity, the request may fail.
+* Deploying to Staging Slot - If a cloud service has a deployment in either slot, then the entire cloud service is pinned to a specific cluster. This means that if a deployment already exists in the production slot, then a new staging deployment can only be allocated in the same cluster as the production slot. If the cluster is nearing capacity, the request may fail.
+* Scaling - Adding new instances to an existing cloud service must allocate in the same cluster. Small scaling requests can usually be allocated, but not always. If the cluster is nearing capacity, the request may fail.
+* Affinity Group - A new deployment to an empty cloud service can be allocated in any cluster in that region, unless the cloud service is pinned to an affinity group. Deployments to the same affinity group are attempted on the same cluster. If the cluster is nearing capacity, the request may fail.
+* Affinity Group virtual network - Older Virtual Networks were tied to affinity groups instead of regions, and cloud services in these Virtual Networks would be pinned to the affinity group cluster. Attempted deployments to this type of virtual network occur on the pinned cluster. If the cluster is nearing capacity, the request may fail.
## Solutions
   * Deploy the workload to a new cloud service
   * Update the CNAME or A record to point traffic to the new cloud service
   * Once zero traffic is going to the old site, you can delete the old cloud service. This solution should incur zero downtime.
-2. Delete both production and staging slots - This solution will preserve your existing DNS name, but will cause downtime to your application.
+2. Delete both production and staging slots - This solution preserves your existing Domain Name System (DNS) name but causes downtime to your application.
* Delete the production and staging slots of an existing cloud service so that the cloud service is empty, and then
- * Create a new deployment in the existing cloud service. This will re-attempt to allocation on all clusters in the region. Ensure the cloud service is not tied to an affinity group.
-3. Reserved IP - This solution will preserve your existing IP address, but will cause downtime to your application.
+ * Create a new deployment in the existing cloud service. This solution reattempts allocation on all clusters in the region. Ensure the cloud service isn't tied to an affinity group.
+3. Reserved IP - This solution preserves your existing IP address but causes downtime to your application.
* Create a ReservedIP for your existing deployment using PowerShell
New-AzureReservedIP -ReservedIPName {new reserved IP name} -Location {location} -ServiceName {existing service name} ```
- * Follow #2 from above, making sure to specify the new ReservedIP in the service's CSCFG.
-4. Remove affinity group for new deployments - Affinity Groups are no longer recommended. Follow steps for #1 above to deploy a new cloud service. Ensure cloud service is not in an affinity group.
-5. Convert to a Regional Virtual Network - See [How to migrate from Affinity Groups to a Regional Virtual Network (VNet)](/previous-versions/azure/virtual-network/virtual-networks-migrate-to-regional-vnet).
+ * Follow #2, making sure to specify the new ReservedIP in the service's CSCFG.
+4. Remove affinity group for new deployments - Affinity Groups are no longer recommended. Follow steps for #1 to deploy a new cloud service. Ensure cloud service isn't in an affinity group.
+5. Convert to a Regional Virtual Network - See [How to migrate from Affinity Groups to a Regional Virtual Network (VNet)](/previous-versions/azure/virtual-network/virtual-networks-migrate-to-regional-vnet).
cloud-services Cloud Services Certs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-certs-create.md
Title: Cloud Services (classic) and management certificates | Microsoft Docs
description: Learn about how to create and deploy certificates for cloud services and for authenticating with the management API in Azure. Previously updated : 02/21/2023 Last updated : 07/23/2024 -
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-Certificates are used in Azure for cloud services ([service certificates](#what-are-service-certificates)) and for authenticating with the management API ([management certificates](#what-are-management-certificates)). This topic gives a general overview of both certificate types, how to [create](#create) and deploy them to Azure.
+Certificates are used in Azure for cloud services ([service certificates](#what-are-service-certificates)) and for authenticating with the management API ([management certificates](#what-are-management-certificates)). This article gives a general overview of both certificate types, how to [create](#create) and deploy them to Azure.
-Certificates used in Azure are x.509 v3 certificates and can be signed by another trusted certificate or they can be self-signed. A self-signed certificate is signed by its own creator, therefore it is not trusted by default. Most browsers can ignore this problem. You should only use self-signed certificates when developing and testing your cloud services.
+Certificates used in Azure are x.509 v3 certificates. They can be self-signed or signed by another trusted certificate. A self-signed certificate is signed by its own creator, so it isn't trusted by default. Most browsers can ignore this problem. You should only use self-signed certificates when developing and testing your cloud services.
-Certificates used by Azure can contains a public key. Certificates have a thumbprint that provides a means to identify them in an unambiguous way. This thumbprint is used in the Azure [configuration file](cloud-services-configure-ssl-certificate-portal.md) to identify which certificate a cloud service should use.
+Certificates used by Azure can contain a public key. Certificates have a thumbprint that provides a means to identify them in an unambiguous way. This thumbprint is used in the Azure [configuration file](cloud-services-configure-ssl-certificate-portal.md) to identify which certificate a cloud service should use.
>[!Note]
>Azure Cloud Services doesn't accept AES256-SHA256 encrypted certificates.
## What are service certificates?

Service certificates are attached to cloud services and enable secure communication to and from the service. For example, if you deployed a web role, you would want to supply a certificate that can authenticate an exposed HTTPS endpoint. Service certificates, defined in your service definition, are automatically deployed to the virtual machine that is running an instance of your role.
-You can upload service certificates to Azure either using the Azure portal or by using the classic deployment model. Service certificates are associated with a specific cloud service. They are assigned to a deployment in the service definition file.
+You can upload service certificates to Azure either using the Azure portal or by using the classic deployment model. Service certificates are associated with a specific cloud service. The service definition file assigns them to a deployment.
-Service certificates can be managed separately from your services, and may be managed by different individuals. For example, a developer may upload a service package that refers to a certificate that an IT manager has previously uploaded to Azure. An IT manager can manage and renew that certificate (changing the configuration of the service) without needing to upload a new service package. Updating without a new service package is possible because the logical name, store name, and location of the certificate is in the service definition file and while the certificate thumbprint is specified in the service configuration file. To update the certificate, it's only necessary to upload a new certificate and change the thumbprint value in the service configuration file.
+Service certificates can be managed separately from your services, and different individuals may manage them. For example, a developer may upload a service package that refers to a certificate that an IT manager previously uploaded to Azure. An IT manager can manage and renew that certificate (changing the configuration of the service) without needing to upload a new service package. Updating without a new service package is possible because the logical name, store name, and location of the certificate are in the service definition file, while the certificate thumbprint is specified in the service configuration file. To update the certificate, it's only necessary to upload a new certificate and change the thumbprint value in the service configuration file.
>[!Note]
>The [Cloud Services FAQ - Configuration and Management](cloud-services-configuration-and-management-faq.yml) article has some helpful information about certificates.

## What are management certificates?
-Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services. These are not really related to cloud services.
+Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services. These certificates aren't related to cloud services.
> [!WARNING] > Be careful! These types of certificates allow anyone who authenticates with them to manage the subscription they are associated with.
### Limitations
-There is a limit of 100 management certificates per subscription. There is also a limit of 100 management certificates for all subscriptions under a specific service administratorΓÇÖs user ID. If the user ID for the account administrator has already been used to add 100 management certificates and there is a need for more certificates, you can add a co-administrator to add the additional certificates.
+There's a limit of 100 management certificates per subscription. There's also a limit of 100 management certificates for all subscriptions under a specific service administrator's user ID. If the user ID for the account administrator was already used to add 100 management certificates and there's a need for more certificates, you can add a coadministrator to add more certificates.
-Additionally, management certificates can not be used with CSP subscriptions as CSP subscriptions only support the Azure Resource Manager deployment model and management certificates use the classic deployment model. Reference [Azure Resource Manager vs classic deployment model](../azure-resource-manager/management/deployment-models.md) and [Understanding Authentication with the Azure SDK for .NET](/dotnet/azure/sdk/authentication) for more information on your options for CSP subscriptions.
+Additionally, management certificates can't be used with Cloud Solution Provider (CSP) subscriptions because CSP subscriptions only support the Azure Resource Manager deployment model, and management certificates use the classic deployment model. See [Azure Resource Manager vs classic deployment model](../azure-resource-manager/management/deployment-models.md) and [Understanding Authentication with the Azure SDK for .NET](/dotnet/azure/sdk/authentication) for more information on your options for CSP subscriptions.
<a name="create"></a>

## Create a new self-signed certificate
You can use any tool available to create a self-signed certificate as long as th
There are two easy ways to create a certificate on Windows: with the `makecert.exe` utility or with IIS.

### Makecert.exe
-This utility has been deprecated and is no longer documented here. For more information, see [this MSDN article](/windows/desktop/SecCrypto/makecert).
+This utility is retired and is no longer documented here. For more information, see [this Microsoft Developer Network (MSDN) article](/windows/desktop/SecCrypto/makecert).
### PowerShell

```powershell
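# The original listing is excerpted in this view; the lines below are a minimal
# sketch of the creation steps, with a placeholder DNS name, store location,
# and password.
$cert = New-SelfSignedCertificate -DnsName 'yourdomain.cloudapp.net' `
    -CertStoreLocation 'Cert:\LocalMachine\My' -KeyLength 2048 -KeySpec 'KeyExchange'
$password = ConvertTo-SecureString -String 'your-password' -Force -AsPlainText

# Export the certificate with its private key (.pfx) so it can be uploaded to Azure.
Export-PfxCertificate -Cert $cert -FilePath .\my-cert-file.pfx -Password $password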
Export-Certificate -Type CERT -Cert $cert -FilePath .\my-cert-file.cer
```

### Internet Information Services (IIS)
-There are many pages on the internet that cover how to do this with IIS. [Here](https://www.sslshopper.com/article-how-to-create-a-self-signed-certificate-in-iis-7.html) is a great one I found that I think explains it well.
+There are many pages on the internet that cover how to create certificates with IIS, such as [When to Use an IIS Self Signed Certificate](https://www.sslshopper.com/article-how-to-create-a-self-signed-certificate-in-iis-7.html).
### Linux
-[This](../virtual-machines/linux/mac-create-ssh-keys.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) article describes how to create certificates with SSH.
+[Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) describes how to create certificates with SSH.
## Next steps [Upload your service certificate to the Azure portal](cloud-services-configure-ssl-certificate-portal.md).
cloud-services Cloud Services Choose Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-choose-me.md
Title: What is Azure Cloud Services (classic) | Microsoft Docs
-description: Learn about what Azure Cloud Services is, specifically that it's designed to support applications that are scalable, reliable, and inexpensive to operate.
+description: Learn about what Azure Cloud Services is and how it's designed to support applications that are scalable, reliable, and inexpensive to operate.
Previously updated : 02/21/2023 Last updated : 07/23/2024
Azure Cloud Services is an example of a [platform as a service](https://azure.mi
![Azure Cloud Services diagram](./media/cloud-services-choose-me/diagram.png)
-More control also means less ease of use. Unless you need the additional control options, it's typically quicker and easier to get a web application up and running in the Web Apps feature of App Service compared to Azure Cloud Services.
+More control also means less ease of use. Unless you need more control, it's typically quicker and easier to get a web application up and running in the Web Apps feature of App Service compared to Azure Cloud Services.
There are two types of Azure Cloud Services roles. The only difference between the two is how your role is hosted on the VMs:
-* **Web role**: Automatically deploys and hosts your app through IIS.
+* **Web role**: Automatically deploys and hosts your app through Internet Information Services (IIS).
-* **Worker role**: Does not use IIS, and runs your app standalone.
+* **Worker role**: Doesn't use IIS, and runs your app standalone.
For example, a simple application might use just a single web role, serving a website. A more complex application might use a web role to handle incoming requests from users, and then pass those requests on to a worker role for processing. (This communication might use [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) or [Azure Queue storage](../storage/common/storage-introduction.md).)
An Azure Cloud Services application is typically made available to users via a t
## Monitoring

Azure Cloud Services also provides monitoring. Like Virtual Machines, it detects a failed physical server and restarts the VMs that were running on that server on a new machine. But Azure Cloud Services also detects failed VMs and applications, not just hardware failures. Unlike Virtual Machines, it has an agent inside each web and worker role, and so it's able to start new VMs and application instances when failures occur.
-The PaaS nature of Azure Cloud Services has other implications, too. One of the most important is that applications built on this technology should be written to run correctly when any web or worker role instance fails. To achieve this, an Azure Cloud Services application shouldn't maintain state in the file system of its own VMs. Unlike VMs created with Virtual Machines, writes made to Azure Cloud Services VMs aren't persistent. There's nothing like a Virtual Machines data disk. Instead, an Azure Cloud Services application should explicitly write all state to Azure SQL Database, blobs, tables, or some other external storage. Building applications this way makes them easier to scale and more resistant to failure, which are both important goals of Azure Cloud Services.
+The PaaS nature of Azure Cloud Services has other implications, too. One of the most important implications is that you should write applications built on this technology to run correctly when any web or worker role instance fails. To achieve this goal, an Azure Cloud Services application shouldn't maintain state in the file system of its own VMs. Unlike VMs created with Virtual Machines, writes made to Azure Cloud Services VMs aren't persistent. There's nothing like a Virtual Machines data disk. Instead, an Azure Cloud Services application should explicitly write all state to Azure SQL Database, blobs, tables, or some other external storage. Building applications this way makes them easier to scale and more resistant to failure. Scalability and resiliency are both important goals of Azure Cloud Services.
## Next steps * [Create a cloud service app in .NET](cloud-services-dotnet-get-started.md)
cloud-services Cloud Services Configure Ssl Certificate Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md
Title: Configure TLS for a cloud service | Microsoft Docs
description: Learn how to specify an HTTPS endpoint for a web role and how to upload a TLS/SSL certificate to secure your application. These examples use the Azure portal. Previously updated : 02/21/2023 Last updated : 07/23/2024
Transport Layer Security (TLS), previously known as Secure Socket Layer (SSL) en
> The procedures in this task apply to Azure Cloud Services; for App Services, see [this](../app-service/configure-ssl-bindings.md). >
-This task uses a production deployment. Information on using a staging deployment is provided at the end of this topic.
+This task uses a production deployment. Information on using a staging deployment is provided at the end of this article.
-Read [this](cloud-services-how-to-create-deploy-portal.md) first if you have not yet created a cloud service.
+Read [How to create and deploy an Azure Cloud Service (classic)](cloud-services-how-to-create-deploy-portal.md) first if you haven't yet created a cloud service.
## Step 1: Get a TLS/SSL certificate
-To configure TLS for an application, you first need to get a TLS/SSL certificate that has been signed by a Certificate Authority (CA), a trusted third party who issues certificates for this purpose. If you do not already have one, you need to obtain one from a company that sells TLS/SSL certificates.
+To configure TLS for an application, you first need to get a TLS/SSL certificate signed by a Certificate Authority (CA), a trusted partner who issues certificates for this purpose. If you don't already have one, you need to obtain one from a company that sells TLS/SSL certificates.
The certificate must meet the following requirements for TLS/SSL certificates in Azure:

* The certificate must contain a public key.
* The certificate must be created for key exchange, exportable to a Personal Information Exchange (.pfx) file.
-* The certificate's subject name must match the domain used to access the cloud service. You cannot obtain a TLS/SSL certificate from a certificate authority (CA) for the cloudapp.net domain. You must acquire a custom domain name to use when access your service. When you request a certificate from a CA, the certificate's subject name must match the custom domain name used to access your application. For example, if your custom domain name is **contoso.com** you would request a certificate from your CA for ***.contoso.com** or **www\.contoso.com**.
+* The certificate's subject name must match the domain used to access the cloud service. You can't obtain a TLS/SSL certificate from a certificate authority (CA) for the cloudapp.net domain. You must acquire a custom domain name to use when accessing your service. When you request a certificate from a CA, the certificate's subject name must match the custom domain name used to access your application. For example, if your custom domain name is **contoso.com** you would request a certificate from your CA for ***.contoso.com** or **www\.contoso.com**.
* The certificate must use a minimum of 2048-bit encryption.
-For test purposes, you can [create](cloud-services-certs-create.md) and use a self-signed certificate. A self-signed certificate is not authenticated through a CA and can use the cloudapp.net domain as the website URL. For example, the following task uses a self-signed certificate in which the common name (CN) used in the certificate is **sslexample.cloudapp.net**.
+For test purposes, you can [create](cloud-services-certs-create.md) and use a self-signed certificate. A self-signed certificate isn't authenticated through a CA and can use the cloudapp.net domain as the website URL. For example, the following task uses a self-signed certificate in which the common name (CN) used in the certificate is **sslexample.cloudapp.net**.
Next, you must include information about the certificate in your service definition and service configuration files.
Your application must be configured to use the certificate, and an HTTPS endpoin
</WebRole> ```
- The **Certificates** section defines the name of our certificate, its location, and the name of the store where it is located.
+ The **Certificates** section defines the name of our certificate, its location, and the name of the store where it's located.
Permissions (`permissionLevel` attribute) can be set to one of the following values:
</WebRole> ```
- All the required changes to the service definition file have been
- completed; but, you still need to add the certificate information to
- the service configuration file.
-4. In your service configuration file (CSCFG), ServiceConfiguration.Cloud.cscfg, add a **Certificates**
-value with that of your certificate. The following code sample provides
- details of the **Certificates** section, except for the thumbprint value.
+ All the required changes to the service definition file are complete, but you still need to add the certificate information to the service configuration file.
+
+4. In your service configuration file (CSCFG), ServiceConfiguration.Cloud.cscfg, add a **Certificates** value with that of your certificate. The following code sample provides details of the **Certificates** section, except for the thumbprint value.
```xml <Role name="Deployment">
value with that of your certificate. The following code sample provides
(This example uses **sha1** for the thumbprint algorithm. Specify the appropriate value for your certificate's thumbprint algorithm.)
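
To find the thumbprint value to paste into the service configuration file, you can list the certificates in the store from PowerShell. This is a small sketch; it assumes the certificate is in the local machine's personal store.

```powershell
# List certificate subjects and thumbprints so you can copy the right value
# into the service configuration file; the store path is an assumption.
Get-ChildItem -Path Cert:\LocalMachine\My |
    Select-Object Subject, Thumbprint, NotAfter
```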
-Now that the service definition and service configuration files have
-been updated, package your deployment for uploading to Azure. If
-you are using **cspack**, don't use the
-**/generateConfigurationFile** flag, as that will overwrite the
-certificate information you just inserted.
+Now that you updated the service definition and service configuration files, package your deployment for uploading to Azure. If
+you're using **cspack**, don't use the
+**/generateConfigurationFile** flag, as that overwrites the
+certificate information you inserted.
## Step 3: Upload a certificate

Connect to the Azure portal and...
![Publish your cloud service](media/cloud-services-configure-ssl-certificate-portal/browse.png)
-2. Click **Certificates**.
+2. Select **Certificates**.
![Click the certificates icon](media/cloud-services-configure-ssl-certificate-portal/certificate-item.png)
-3. Click **Upload** at the top of the certificates area.
+3. Select **Upload** at the top of the certificates area.
![Click the Upload menu item](media/cloud-services-configure-ssl-certificate-portal/Upload_menu.png)
-4. Provide the **File**, **Password**, then click **Upload** at the bottom of the data entry area.
+4. Provide the **File**, **Password**, then select **Upload** at the bottom of the data entry area.
## Step 4: Connect to the role instance by using HTTPS
Now that your deployment is up and running in Azure, you can connect to it using HTTPS.
-1. Click the **Site URL** to open up the web browser.
+1. Select the **Site URL** to open the web browser.
![Click the Site URL](media/cloud-services-configure-ssl-certificate-portal/navigate.png)
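You can also test the endpoint from PowerShell; a small sketch (the URL is the sample name used earlier, and a self-signed certificate is expected to produce a trust warning or validation error):

```powershell
# Request the HTTPS endpoint; with a CA-issued certificate this succeeds,
# while a self-signed certificate typically fails validation.
Invoke-WebRequest -Uri "https://sslexample.cloudapp.net" -UseBasicParsing
```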
cloud-services Cloud Services Connect To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-connect-to-custom-domain.md
Title: Connect a Cloud Service (classic) to a custom Domain Controller | Microsoft Docs
description: Learn how to connect your web/worker roles to a custom AD Domain using PowerShell and AD Domain Extension Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-We will first set up a Virtual Network (VNet) in Azure. We will then add an Active Directory Domain Controller (hosted on an Azure Virtual Machine) to the VNet. Next, we will add existing cloud service roles to the pre-created VNet, then connect them to the Domain Controller.
+We first set up a virtual network in Azure. We then add an Active Directory Domain Controller (hosted on an Azure Virtual Machine) to the virtual network. Next, we add existing cloud service roles to the precreated virtual network, then connect them to the Domain Controller.
Before we get started, here are a couple of things to keep in mind:
1. This tutorial uses PowerShell, so make sure you have Azure PowerShell installed and ready to go. To get help with setting up Azure PowerShell, see [How to install and configure Azure PowerShell](/powershell/azure/).
-2. Your AD Domain Controller and Web/Worker Role instances need to be in the VNet.
+2. Your AD Domain Controller and Web/Worker Role instances need to be in the virtual network.
-Follow this step-by-step guide and if you run into any issues, leave us a comment at the end of the article. Someone will get back to you (yes, we do read comments).
+Follow this step-by-step guide, and if you run into any issues, leave us a comment at the end of the article.
The network that is referenced by the cloud service must be a **classic virtual network**.
-## Create a Virtual Network
-You can create a Virtual Network in Azure using the Azure portal or PowerShell. For this tutorial, PowerShell is used. To create a virtual network using the Azure portal, see [Create a virtual network](../virtual-network/quick-create-portal.md). The article covers creating a virtual network (Resource Manager), but you must create a virtual network (Classic) for cloud services. To do so, in the portal, select **Create a resource**, type *virtual network* in the **Search** box, and then press **Enter**. In the search results, under **Everything**, select **Virtual network**. Under **Select a deployment model**, select **Classic**, then select **Create**. You can then follow the steps in the article.
+## Create a virtual network
+You can create a virtual network in Azure using the Azure portal or PowerShell. For this tutorial, PowerShell is used. To create a virtual network using the Azure portal, see [Create a virtual network](../virtual-network/quick-create-portal.md). The article covers creating a virtual network (Resource Manager), but you must create a virtual network (Classic) for cloud services. To do so, in the portal, select **Create a resource**, type *virtual network* in the **Search** box, and then press **Enter**. In the search results, under **Everything**, select **Virtual network**. Under **Select a deployment model**, select **Classic**, then select **Create**. You can then follow the steps in the article.
```powershell
-#Create Virtual Network
+#Create virtual network
$vnetStr = @"<?xml version="1.0" encoding="utf-8"?>
Set-AzureVNetConfig -ConfigurationPath $vnetConfigPath
```
## Create a Virtual Machine
-Once you have completed setting up the Virtual Network, you will need to create an AD Domain Controller. For this tutorial, we will be setting up an AD Domain Controller on an Azure Virtual Machine.
+Once you finish setting up the virtual network, you need to create an AD Domain Controller. For this tutorial, we set up an AD Domain Controller on an Azure Virtual Machine (VM).
-To do this, create a virtual machine through PowerShell using the following commands:
+Create a virtual machine through PowerShell using the following commands:
```powershell
# Initialize variables
$username = '<your-username>'
$password = '<your-password>'
$affgrp = '<your-affgrp>'
-# Create a VM and add it to the Virtual Network
+# Create a VM and add it to the virtual network
New-AzureQuickVM -Windows -ServiceName $vmsvc1 -Name $vm1 -ImageName $imgname -AdminUsername $username -Password $password -AffinityGroup $affgrp -SubnetNames $subnetname -VNetName $vnetname
```
## Promote your Virtual Machine to a Domain Controller
-To configure the Virtual Machine as an AD Domain Controller, you will need to log in to the VM and configure it.
+To configure the Virtual Machine as an AD Domain Controller, you need to sign in to the VM and configure it.
-To log in to the VM, you can get the RDP file through PowerShell, use the following commands:
+To sign in to the VM, get the Remote Desktop Protocol (RDP) file through PowerShell by using the following commands:
```powershell
# Get RDP file
Get-AzureRemoteDesktopFile -ServiceName $vmsvc1 -Name $vm1 -LocalPath <rdp-file-path>
```
-Once you are signed in to the VM, set up your Virtual Machine as an AD Domain Controller by following the step-by-step guide on [How to set up your customer AD Domain Controller](https://social.technet.microsoft.com/wiki/contents/articles/12370.windows-server-2012-set-up-your-first-domain-controller-step-by-step.aspx).
+Once you sign in to the VM, set up your Virtual Machine as an AD Domain Controller by following the step-by-step guide on [How to set up your customer AD Domain Controller](https://social.technet.microsoft.com/wiki/contents/articles/12370.windows-server-2012-set-up-your-first-domain-controller-step-by-step.aspx). A PowerShell sketch of that promotion follows.
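As a rough PowerShell equivalent of the linked guide (run inside the VM on Windows Server 2012 or later; the domain name is a placeholder, and Install-ADDSForest prompts for a safe-mode administrator password):

```powershell
# Install the AD DS role, then promote this server to the first domain
# controller of a new forest.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "contoso.local"
```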
-## Add your Cloud Service to the Virtual Network
-Next, you need to add your cloud service deployment to the new VNet. To do this, modify your cloud service cscfg by adding the relevant sections to your cscfg using Visual Studio or the editor of your choice.
+## Add your Cloud Service to the virtual network
+Next, you need to add your cloud service deployment to the new virtual network. To do so, modify your cloud service configuration file (.cscfg) by adding the relevant sections, using Visual Studio or the editor of your choice. A sketch of the relevant section appears in the following sample.
```xml
<ServiceConfiguration serviceName="[hosted-service-name]" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="[os-family]" osVersion="*">
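  <!-- A hedged sketch; names are placeholders and the Role elements are omitted.
       The NetworkConfiguration section places the roles in the classic virtual
       network and its subnet. -->
  <NetworkConfiguration>
    <VirtualNetworkSite name="YourVNetName" />
    <AddressAssignments>
      <InstanceAddress roleName="WebRole1">
        <Subnets>
          <Subnet name="YourSubnetName" />
        </Subnets>
      </InstanceAddress>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>
```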
cloud-services Cloud Services Custom Domain Name Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-custom-domain-name-portal.md
Title: Configure a custom domain name in Cloud Services (classic) | Microsoft Docs
-description: Learn how to expose your Azure application or data to the internet on a custom domain by configuring DNS settings. These examples use the Azure portal.
+description: Learn how to expose your Azure application or data to the internet on a custom domain by configuring Domain Name System (DNS) settings. These examples use the Azure portal.
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-When you create a Cloud Service, Azure assigns it to a subdomain of **cloudapp.net**. For example, if your Cloud Service is named "contoso", your users will be able to access your application on a URL like `http://contoso.cloudapp.net`. Azure also assigns a virtual IP address.
+When you create a Cloud Service, Azure assigns it to a subdomain of **cloudapp.net**. For example, if your Cloud Service is named `contoso`, your users can access your application at a URL like `http://contoso.cloudapp.net`. Azure also assigns a virtual IP address.
However, you can also expose your application on your own domain name, such as **contoso.com**. This article explains how to reserve or configure a custom domain name for Cloud Service web roles.
Do you already understand what CNAME and A records are? [Jump past the explanati
## Understand CNAME and A records
-CNAME (or alias records) and A records both allow you to associate a domain name with a specific server (or service in this case,) however they work differently. There are also some specific considerations when using A records with Azure Cloud services that you should consider before deciding which to use.
+CNAME (or alias) records and A records both allow you to associate a domain name with a specific server (or service, in this case); however, they work differently. There are also some A record considerations specific to Azure Cloud Services that you should weigh before deciding which to use.
### CNAME or Alias record
-A CNAME record maps a *specific* domain, such as **contoso.com** or **www\.contoso.com**, to a canonical domain name. In this case, the canonical domain name is the **[myapp].cloudapp.net** domain name of your Azure hosted application. Once created, the CNAME creates an alias for the **[myapp].cloudapp.net**. The CNAME entry will resolve to the IP address of your **[myapp].cloudapp.net** service automatically, so if the IP address of the cloud service changes, you do not have to take any action.
+A CNAME record maps a *specific* domain, such as **contoso.com** or **www\.contoso.com**, to a canonical domain name. In this case, the canonical domain name is the **[myapp].cloudapp.net** domain name of your Azure hosted application. Once created, the CNAME record creates an alias for **[myapp].cloudapp.net**. The CNAME entry resolves to the IP address of your **[myapp].cloudapp.net** service automatically, so if the IP address of the cloud service changes, you don't have to take any action.
> [!NOTE]
> Some domain registrars only allow you to map subdomains when using a CNAME record, such as www\.contoso.com, and not root names, such as contoso.com. For more information on CNAME records, see the documentation provided by your registrar, [the Wikipedia entry on CNAME record](https://en.wikipedia.org/wiki/CNAME_record), or the [IETF Domain Names - Implementation and Specification](https://tools.ietf.org/html/rfc1035) document.

### A record
-An *A* record maps a domain, such as **contoso.com** or **www\.contoso.com**, *or a wildcard domain* such as **\*.contoso.com**, to an IP address. In the case of an Azure Cloud Service, the virtual IP of the service. So the main benefit of an A record over a CNAME record is that you can have one entry that uses a wildcard, such as \***.contoso.com**, which would handle requests for multiple sub-domains such as **mail.contoso.com**, **login.contoso.com**, or **www\.contso.com**.
+An *A* record maps a domain, such as **contoso.com** or **www\.contoso.com**, *or a wildcard domain* such as **\*.contoso.com**, to an IP address. For an Azure Cloud Service, that IP address is the virtual IP of the service. So the main benefit of an A record over a CNAME record is that you can have one entry that uses a wildcard, such as \***.contoso.com**, which would handle requests for multiple subdomains such as **mail.contoso.com**, **login.contoso.com**, or **www\.contoso.com**.
> [!NOTE] > Since an A record is mapped to a static IP address, it cannot automatically resolve changes to the IP address of your Cloud Service. The IP address used by your Cloud Service is allocated the first time you deploy to an empty slot (either production or staging.) If you delete the deployment for the slot, the IP address is released by Azure and any future deployments to the slot may be given a new IP address.
To create a CNAME record, you must add a new entry in the DNS table for your cus
1. Use one of these methods to find the **.cloudapp.net** domain name assigned to your cloud service.
- * Login to the [Azure portal], select your cloud service, look at the **Overview** section and then find the **Site URL** entry.
+ * Sign in to the [Azure portal], select your cloud service, look at the **Overview** section, and then find the **Site URL** entry.
![quick glance section showing the site URL][csurl]
To create a CNAME record, you must add a new entry in the DNS table for your cus
Get-AzureDeployment -ServiceName yourservicename | Select Url
```
- Save the domain name used in the URL returned by either method, as you will need it when creating a CNAME record.
-2. Log on to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**.
-3. Now find where you can select or enter CNAME's. You may have to select the record type from a drop down, or go to an advanced settings page. You should look for the words **CNAME**, **Alias**, or **Subdomains**.
+ Save the domain name used in the URL returned by either method, as you need it when creating a CNAME record.
+2. Sign in to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**.
+3. Now find where you can select or enter CNAMEs. You may have to select the record type from a drop-down or go to an advanced settings page. You should look for the words **CNAME**, **Alias**, or **Subdomains**.
4. You must also provide the domain or subdomain alias for the CNAME, such as **www** if you want to create an alias for **www\.customdomain.com**. If you want to create an alias for the root domain, it may be listed as the '**\@**' symbol in your registrar's DNS tools.
5. Then, you must provide a canonical host name, which is your application's **cloudapp.net** domain in this case.
For example, the following CNAME record forwards all traffic from **www\.contoso
> (contoso.cloudapp.net), so the forwarding process is invisible to the end user.
>
-> The example above only applies to traffic at the **www** subdomain. Since you cannot use wildcards with CNAME records, you must create one CNAME for each domain/subdomain. If you want to direct traffic from subdomains, such as *.contoso.com, to your cloudapp.net address, you can configure a **URL Redirect** or **URL Forward** entry in your DNS settings, or create an A record.
+> The preceding example only applies to traffic at the **www** subdomain. Since you cannot use wildcards with CNAME records, you must create one CNAME for each domain/subdomain. If you want to direct traffic from subdomains, such as *.contoso.com, to your cloudapp.net address, you can configure a **URL Redirect** or **URL Forward** entry in your DNS settings, or create an A record.
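After the registrar change propagates, you can verify the mapping from PowerShell; a small sketch (the host name is a placeholder):

```powershell
# Confirm the CNAME resolves to the cloudapp.net name. Resolve-DnsName ships
# with Windows 8/Windows Server 2012 and later.
Resolve-DnsName -Name "www.contoso.com" -Type CNAME
```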
## Add an A record for your custom domain
To create an A record, you must first find the virtual IP address of your cloud service. Then add a new entry in the DNS table for your custom domain by using the tools provided by your registrar. Each registrar has a similar but slightly different method of specifying an A record, but the concepts are the same.
1. Use one of the following methods to get the IP address of your cloud service.
- * Login to the [Azure portal], select your cloud service, look at the **Overview** section and then find the **Public IP addresses** entry.
+ * Sign in to the [Azure portal], select your cloud service, look at the **Overview** section, and then find the **Public IP addresses** entry.
![quick glance section showing the VIP][vip]
To create an A record, you must first find the virtual IP address of your cloud
get-azurevm -servicename yourservicename | get-azureendpoint -VM {$_.VM} | select Vip
```
- Save the IP address, as you will need it when creating an A record.
-2. Log on to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**.
-3. Now find where you can select or enter A record's. You may have to select the record type from a drop down, or go to an advanced settings page.
-4. Select or enter the domain or subdomain that will use this A record. For example, select **www** if you want to create an alias for **www\.customdomain.com**. If you want to create a wildcard entry for all subdomains, enter '*****'. This will cover all sub-domains such as **mail.customdomain.com**, **login.customdomain.com**, and **www\.customdomain.com**.
+ Save the IP address, as you need it when creating an A record.
+2. Sign in to your DNS registrar's website and go to the page for managing DNS. Look for links or areas of the site labeled as **Domain Name**, **DNS**, or **Name Server Management**.
+3. Now find where you can select or enter A records. You may have to select the record type from a drop-down, or go to an advanced settings page.
+4. Select or enter the domain or subdomain that uses this A record. For example, select **www** if you want to create an alias for **www\.customdomain.com**. If you want to create a wildcard entry for all subdomains, enter a wildcard character (`*`). This entry covers all subdomains such as **mail.customdomain.com**, **login.customdomain.com**, and **www\.customdomain.com**.
If you want to create an A record for the root domain, it may be listed as the '**\@**' symbol in your registrar's DNS tools.
-5. Enter the IP address of your cloud service in the provided field. This associates the domain entry used in the A record with the IP address of your cloud service deployment.
+5. Enter the IP address of your cloud service in the provided field. This step associates the domain entry used in the A record with the IP address of your cloud service deployment.
For example, the following A record forwards all traffic from **contoso.com** to **137.135.70.239**, the IP address of your deployed application:
This example demonstrates creating an A record for the root domain. If you wish
## Next steps
* [How to Manage Cloud Services](cloud-services-how-to-manage-portal.md)
-* [How to Map CDN Content to a Custom Domain](../cdn/cdn-map-content-to-custom-domain.md)
+* [How to Map Content Delivery Network (CDN) Content to a Custom Domain](../cdn/cdn-map-content-to-custom-domain.md)
* [General configuration of your cloud service](cloud-services-how-to-configure-portal.md).
* Learn how to [deploy a cloud service](cloud-services-how-to-create-deploy-portal.md).
* Configure [TLS/SSL certificates](cloud-services-configure-ssl-certificate-portal.md).
cloud-services Cloud Services Diagnostics Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-diagnostics-powershell.md
Title: Enable diagnostics in Azure Cloud Services (classic) using PowerShell | Microsoft Docs
description: Learn how to use PowerShell to enable collecting diagnostic data from an Azure Cloud Service with the Azure Diagnostics extension. Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-You can collect diagnostic data like application logs, performance counters etc. from a Cloud Service using the Azure Diagnostics extension. This article describes how to enable the Azure Diagnostics extension for a Cloud Service using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article.
+You can collect diagnostic data, such as application logs and performance counters, from a Cloud Service by using the Azure Diagnostics extension. This article describes how to enable the Azure Diagnostics extension for a Cloud Service by using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article.
## Enable diagnostics extension as part of deploying a Cloud Service
This approach is applicable to continuous integration scenarios, where the diagnostics extension can be enabled as part of deploying the cloud service. When creating a new Cloud Service deployment, you can enable the diagnostics extension by passing in the *ExtensionConfiguration* parameter to the [New-AzureDeployment](/powershell/module/servicemanagement/azure/new-azuredeployment) cmdlet. The *ExtensionConfiguration* parameter takes an array of diagnostics configurations that can be created using the [New-AzureServiceDiagnosticsExtensionConfig](/powershell/module/servicemanagement/azure/new-azureservicediagnosticsextensionconfig) cmdlet.
$workerrole_diagconfig = New-AzureServiceDiagnosticsExtensionConfig -Role "Worke
New-AzureDeployment -ServiceName $service_name -Slot Production -Package $service_package -Configuration $service_config -ExtensionConfiguration @($webrole_diagconfig,$workerrole_diagconfig)
```
-If the diagnostics configuration file specifies a `StorageAccount` element with a storage account name, then the `New-AzureServiceDiagnosticsExtensionConfig` cmdlet will automatically use that storage account. For this to work, the storage account needs to be in the same subscription as the Cloud Service being deployed.
+If the diagnostics configuration file specifies a `StorageAccount` element with a storage account name, then the `New-AzureServiceDiagnosticsExtensionConfig` cmdlet automatically uses that storage account. For this configuration to work, the storage account needs to be in the same subscription as the Cloud Service being deployed.
-From Azure SDK 2.6 onward the extension configuration files generated by the MSBuild publish target output will include the storage account name based on the diagnostics configuration string specified in the service configuration file (.cscfg). The script below shows you how to parse the Extension configuration files from the publish target output and configure diagnostics extension for each role when deploying the cloud service.
+From Azure SDK 2.6 onward, the extension configuration files generated by the MSBuild publish target include the storage account name based on the diagnostics configuration string specified in the service configuration file (.cscfg). The following script shows you how to parse the extension configuration files from the publish target output and configure the diagnostics extension for each role when deploying the cloud service.
```powershell
$service_name = "MyService"
foreach ($extPath in $diagnosticsExtensions)
New-AzureDeployment -ServiceName $service_name -Slot Production -Package $service_package -Configuration $service_config -ExtensionConfiguration $diagnosticsConfigurations
```
-Visual Studio Online uses a similar approach for automated deployments of Cloud Services with the diagnostics extension. See [Publish-AzureCloudDeployment.ps1](https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureCloudPowerShellDeploymentV1/Publish-AzureCloudDeployment.ps1) for a complete example.
+Azure Pipelines (formerly Visual Studio Online) uses a similar approach for automated deployments of Cloud Services with the diagnostics extension. See [Publish-AzureCloudDeployment.ps1](https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureCloudPowerShellDeploymentV1/Publish-AzureCloudDeployment.ps1) for a complete example.
-If no `StorageAccount` was specified in the diagnostics configuration, then you need to pass in the *StorageAccountName* parameter to the cmdlet. If the *StorageAccountName* parameter is specified, then the cmdlet will always use the storage account that is specified in the parameter and not the one that is specified in the diagnostics configuration file.
+If no `StorageAccount` was specified in the diagnostics configuration, then you need to pass in the *StorageAccountName* parameter to the cmdlet. If you specify the *StorageAccountName* parameter, then the cmdlet uses the storage account specified in the parameter and not the one specified in the diagnostics configuration file.
-If the diagnostics storage account is in a different subscription from the Cloud Service, then you need to explicitly pass in the *StorageAccountName* and *StorageAccountKey* parameters to the cmdlet. The *StorageAccountKey* parameter is not needed when the diagnostics storage account is in the same subscription, as the cmdlet can automatically query and set the key value when enabling the diagnostics extension. However, if the diagnostics storage account is in a different subscription, then the cmdlet might not be able to get the key automatically and you need to explicitly specify the key through the *StorageAccountKey* parameter.
+If the diagnostics storage account is in a different subscription from the Cloud Service, then you need to explicitly pass in the *StorageAccountName* and *StorageAccountKey* parameters to the cmdlet. The *StorageAccountKey* parameter isn't needed when the diagnostics storage account is in the same subscription, as the cmdlet can automatically query and set the key value when enabling the diagnostics extension. However, if the diagnostics storage account is in a different subscription, then the cmdlet might not be able to get the key automatically and you need to explicitly specify the key through the *StorageAccountKey* parameter.
```powershell
$webrole_diagconfig = New-AzureServiceDiagnosticsExtensionConfig -Role "WebRole" -DiagnosticsConfigurationPath $webrole_diagconfigpath -StorageAccountName $diagnosticsstorage_name -StorageAccountKey $diagnosticsstorage_key
Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService" -Role "WebRole"
```
## Next Steps
-* For additional guidance on using Azure diagnostics and other techniques to troubleshoot problems, see [Enabling Diagnostics in Azure Cloud Services and Virtual Machines](cloud-services-dotnet-diagnostics.md).
+* For more information on using Azure diagnostics and other techniques to troubleshoot problems, see [Enabling Diagnostics in Azure Cloud Services and Virtual Machines](cloud-services-dotnet-diagnostics.md).
* The [Diagnostics Configuration Schema](../azure-monitor/agents/diagnostics-extension-schema-windows.md) explains the various XML configuration options for the diagnostics extension.
* To learn how to enable the diagnostics extension for Virtual Machines, see [Create a Windows Virtual machine with monitoring and diagnostics using Azure Resource Manager Template](../virtual-machines/extensions/diagnostics-template.md)
cloud-services Cloud Services Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-disaster-recovery-guidance.md
Title: Handling an Azure service disruption that impacts Azure Cloud Services (classic)
-description: Learn what to do in the event of an Azure service disruption that impacts Azure Cloud Services.
+description: Learn what to do if an Azure service disruption impacts Azure Cloud Services.
Previously updated : 02/21/2023 Last updated : 07/23/2024
-# What to do in the event of an Azure service disruption that impacts Azure Cloud Services (classic)
+# What to do if an Azure service disruption impacts Azure Cloud Services (classic)
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-At Microsoft, we work hard to make sure that our services are always available to you when you need them. Forces beyond our control sometimes impact us in ways that cause unplanned service disruptions.
+At Microsoft, we work hard to make sure that our services are always available to you when you need them. Forces beyond our control sometimes affect us in ways that cause unplanned service disruptions.
Microsoft provides a Service Level Agreement (SLA) for its services as a commitment for uptime and connectivity. The SLA for individual Azure services can be found at [Azure Service Level Agreements](https://azure.microsoft.com/support/legal/sla/). Azure already has many built-in platform features that support highly available applications. For more about these services, read [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery).
-This article covers a true disaster recovery scenario, when a whole region experiences an outage due to major natural disaster or widespread service interruption. These are rare occurrences, but you must prepare for the possibility that there is an outage of an entire region. If an entire region experiences a service disruption, the locally redundant copies of your data would temporarily be unavailable. If you have enabled geo-replication, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region is not recoverable, Azure remaps all of the DNS entries to the geo-replicated region.
+This article covers a true disaster recovery scenario, in which a whole region experiences an outage due to a major natural disaster or widespread service interruption. These scenarios are rare occurrences, but you must prepare for the possibility that there's an outage of an entire region. If an entire region experiences a service disruption, the locally redundant copies of your data would temporarily be unavailable. If you enabled geo-replication, three extra copies of your Azure Storage blobs and tables are stored in a different region. If a complete regional outage occurs, or a disaster leaves the primary region unrecoverable, Azure remaps all of the Domain Name System (DNS) entries to the geo-replicated region.
> [!NOTE]
> Be aware that you don't have any control over this process, and it only occurs for datacenter-wide service disruptions. Because of this, you must also rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see [Disaster recovery and high availability for applications built on Microsoft Azure](/azure/architecture/framework/resiliency/backup-and-recovery). If you would like to be able to affect your own failover, you might want to consider the use of [read-access geo-redundant storage (RA-GRS)](../storage/common/storage-redundancy.md), which creates a read-only copy of your data in another region.
The most robust disaster recovery solution involves maintaining multiple deploym
![Balancing Azure Cloud Services across regions with Azure Traffic Manager](./media/cloud-services-disaster-recovery-guidance/using-azure-traffic-manager.png)
-For the fastest response to the loss of a region, it is important that you configure Traffic Manager's [endpoint monitoring](../traffic-manager/traffic-manager-monitoring.md).
+For the fastest response to the loss of a region, it's important that you configure Traffic Manager's [endpoint monitoring](../traffic-manager/traffic-manager-monitoring.md).
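As a hedged illustration of those monitoring settings with the Az.TrafficManager cmdlets (resource names are placeholders; classic cloud services are typically managed with older service management tooling, so treat this as a conceptual sketch rather than this article's procedure):

```powershell
# Create a priority-routed Traffic Manager profile whose endpoint monitoring
# probes each regional deployment over HTTPS, enabling fast failover.
New-AzTrafficManagerProfile -Name "contoso-tm" -ResourceGroupName "contoso-rg" `
    -TrafficRoutingMethod Priority -RelativeDnsName "contoso-tm" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"
```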
## Option 2: Deploy your application to a new region
Maintaining multiple active deployments as described in the previous option incurs additional ongoing costs. If your recovery time objective (RTO) is flexible enough and you have the original code or compiled Cloud Services package, you can create a new instance of your application in another region and update your DNS records to point to the new deployment.
Depending on your application data sources, you may need to check the recovery p
## Option 3: Wait for recovery
-In this case, no action on your part is required, but your service will be unavailable until the region is restored. You can see the current service status on the [Azure Service Health Dashboard](https://azure.microsoft.com/status/).
+In this case, no action on your part is required, but your service is unavailable until the region is restored. You can see the current service status on the [Azure Service Health Dashboard](https://azure.microsoft.com/status/).
## Next steps To learn more about how to implement a disaster recovery and high availability strategy, see [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery).
cloud-services Cloud Services Dotnet Diagnostics Trace Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md
Title: Trace the flow in Cloud Services (classic) Application with Azure Diagnostics
description: Add tracing messages to an Azure application to help debugging, measuring performance, monitoring, traffic analysis, and more. Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-Tracing is a way for you to monitor the execution of your application while it is running. You can use the [System.Diagnostics.Trace](/dotnet/api/system.diagnostics.trace), [System.Diagnostics.Debug](/dotnet/api/system.diagnostics.debug), and [System.Diagnostics.TraceSource](/dotnet/api/system.diagnostics.tracesource) classes to record information about errors and application execution in logs, text files, or other devices for later analysis. For more information about tracing, see [Tracing and Instrumenting Applications](/dotnet/framework/debug-trace-profile/tracing-and-instrumenting-applications).
+Tracing is a way for you to monitor the execution of your application while it's running. You can use the [System.Diagnostics.Trace](/dotnet/api/system.diagnostics.trace), [System.Diagnostics.Debug](/dotnet/api/system.diagnostics.debug), and [System.Diagnostics.TraceSource](/dotnet/api/system.diagnostics.tracesource) classes to record information about errors and application execution in logs, text files, or other devices for later analysis. For more information about tracing, see [Tracing and Instrumenting Applications](/dotnet/framework/debug-trace-profile/tracing-and-instrumenting-applications).
## Use trace statements and trace switches
-Implement tracing in your Cloud Services application by adding the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) to the application configuration and making calls to System.Diagnostics.Trace or System.Diagnostics.Debug in your application code. Use the configuration file *app.config* for worker roles and the *web.config* for web roles. When you create a new hosted service using a Visual Studio template, Azure Diagnostics is automatically added to the project and the DiagnosticMonitorTraceListener is added to the appropriate configuration file for the roles that you add.
+Implement tracing in your Cloud Services application by adding the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) to the application configuration and making calls to System.Diagnostics.Trace or System.Diagnostics.Debug in your application code. Use the configuration file *app.config* for worker roles and the *web.config* for web roles. When you create a new hosted service using a Visual Studio template, Azure Diagnostics is automatically added to the project, and the DiagnosticMonitorTraceListener is added to the appropriate configuration file for the roles that you add.
For information on placing trace statements, see [How to: Add Trace Statements to Application Code](/dotnet/framework/debug-trace-profile/how-to-add-trace-statements-to-application-code).
-By placing [Trace Switches](/dotnet/framework/debug-trace-profile/trace-switches) in your code, you can control whether tracing occurs and how extensive it is. This lets you monitor the status of your application in a production environment. This is especially important in a business application that uses multiple components running on multiple computers. For more information, see [How to: Configure Trace Switches](/dotnet/framework/debug-trace-profile/how-to-create-initialize-and-configure-trace-switches).
+By placing [Trace Switches](/dotnet/framework/debug-trace-profile/trace-switches) in your code, you can control whether tracing occurs and how extensive it is. Tracing lets you monitor the status of your application in a production environment. Monitoring application status is especially important in a business application that uses multiple components running on multiple computers. For more information, see [How to: Configure Trace Switches](/dotnet/framework/debug-trace-profile/how-to-create-initialize-and-configure-trace-switches).
## Configure the trace listener in an Azure application
-Trace, Debug and TraceSource, require you set up "listeners" to collect and record the messages that are sent. Listeners collect, store, and route tracing messages. They direct the tracing output to an appropriate target, such as a log, window, or text file. Azure Diagnostics uses the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) class.
+Trace, Debug, and TraceSource require you set up "listeners" to collect and record the messages that are sent. Listeners collect, store, and route tracing messages. They direct the tracing output to an appropriate target, such as a log, window, or text file. Azure Diagnostics uses the [DiagnosticMonitorTraceListener](/previous-versions/azure/reference/ee758610(v=azure.100)) class.
-Before you complete the following procedure, you must initialize the Azure diagnostic monitor. To do this, see [Enabling Diagnostics in Microsoft Azure](cloud-services-dotnet-diagnostics.md).
+Before you complete the following procedure, you must initialize the Azure diagnostic monitor. To initialize the Azure diagnostic monitor, see [Enabling Diagnostics in Microsoft Azure](cloud-services-dotnet-diagnostics.md).
-Note that if you use the templates that are provided by Visual Studio, the configuration of the listener is added automatically for you.
+> [!NOTE]
+> If you use the templates that are provided by Visual Studio, the configuration of the listener is added automatically for you.
### Add a trace listener
1. Open the web.config or app.config file for your role.
-2. Add the following code to the file. Change the Version attribute to use the version number of the assembly you are referencing. The assembly version does not necessarily change with each Azure SDK release unless there are updates to it.
+2. Add the following code to the file. Change the Version attribute to use the version number of the assembly you're referencing. The assembly version doesn't necessarily change with each Azure SDK release unless there are updates to it.
```xml
<system.diagnostics>
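  <!-- A hedged sketch of the listener entry this step adds. The Version value
       below is an assumption for illustration and must match the
       Microsoft.WindowsAzure.Diagnostics assembly your project references. -->
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           name="AzureDiagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>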
```
> [!IMPORTANT]
- > Make sure you have a project reference to the Microsoft.WindowsAzure.Diagnostics assembly. Update the version number in the xml above to match the version of the referenced Microsoft.WindowsAzure.Diagnostics assembly.
+ > Make sure you have a project reference to the Microsoft.WindowsAzure.Diagnostics assembly. Update the version number in the preceding XML to match the version of the referenced Microsoft.WindowsAzure.Diagnostics assembly.
3. Save the config file.
After you complete the steps to add the listener, you can add trace statements t
### To add trace statements to your code
1. Open a source file for your application. For example, the \<RoleName>.cs file for the worker role or web role.
-2. Add the following using directive if it has not already been added:
+2. Add the following using directive if it isn't present:
```
using System.Diagnostics;
```
-3. Add Trace statements where you want to capture information about the state of your application. You can use a variety of methods to format the output of the Trace statement. For more information, see [How to: Add Trace Statements to Application Code](/dotnet/framework/debug-trace-profile/how-to-add-trace-statements-to-application-code).
+3. Add Trace statements where you want to capture information about the state of your application. You can use various methods to format the output of the Trace statement. For more information, see [How to: Add Trace Statements to Application Code](/dotnet/framework/debug-trace-profile/how-to-add-trace-statements-to-application-code).
4. Save the source file.
cloud-services Cloud Services Dotnet Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics.md
Title: How to use Azure diagnostics (.NET) with Cloud Services (classic) | Microsoft Docs
description: Using Azure diagnostics to gather data from Azure cloud Services for debugging, measuring performance, monitoring, traffic analysis, and more. Previously updated : 02/21/2023 Last updated : 07/23/2024
See [Azure Diagnostics Overview](../azure-monitor/agents/diagnostics-extension-overview.md) for a background on Azure Diagnostics.
## How to Enable Diagnostics in a Worker Role
-This walkthrough describes how to implement an Azure worker role that emits telemetry data using the .NET EventSource class. Azure Diagnostics is used to collect the telemetry data and store it in an Azure storage account. When creating a worker role, Visual Studio automatically enables Diagnostics 1.0 as part of the solution in Azure SDKs for .NET 2.4 and earlier. The following instructions describe the process for creating the worker role, disabling Diagnostics 1.0 from the solution, and deploying Diagnostics 1.2 or 1.3 to your worker role.
+This walkthrough describes how to implement an Azure worker role that emits telemetry data using the .NET EventSource class. Azure Diagnostics is used to collect the telemetry data and store it in an Azure storage account. When you create a worker role, Visual Studio automatically enables Diagnostics 1.0 as part of the solution in Azure Software Development Kits (SDKs) for .NET 2.4 and earlier. The following instructions describe the process for creating the worker role, disabling Diagnostics 1.0 from the solution, and deploying Diagnostics 1.2 or 1.3 to your worker role.
### Prerequisites
-This article assumes you have an Azure subscription and are using Visual Studio with the Azure SDK. If you do not have an Azure subscription, you can sign up for the [Free Trial][Free Trial]. Make sure to [Install and configure Azure PowerShell version 0.8.7 or later][Install and configure Azure PowerShell version 0.8.7 or later].
+This article assumes you have an Azure subscription and are using Visual Studio with the Azure SDK. If you don't have an Azure subscription, you can sign up for the [Free Trial][Free Trial]. Make sure to [Install and configure Azure PowerShell version 0.8.7 or later][Install and configure Azure PowerShell version 0.8.7 or later].
### Step 1: Create a Worker Role
1. Launch **Visual Studio**.
-2. Create an **Azure Cloud Service** project from the **Cloud** template that targets .NET Framework 4.5. Name the project "WadExample" and click Ok.
-3. Select **Worker Role** and click Ok. The project will be created.
+2. Create an **Azure Cloud Service** project from the **Cloud** template that targets .NET Framework 4.5. Name the project "WadExample" and select **OK**.
+3. Select **Worker Role** and select **OK**. The project is created.
4. In **Solution Explorer**, double-click the **WorkerRole1** properties file.
-5. In the **Configuration** tab, un-check **Enable Diagnostics** to disable Diagnostics 1.0 (Azure SDK 2.4 and earlier).
+5. In the **Configuration** tab, uncheck **Enable Diagnostics** to disable Diagnostics 1.0 (Azure SDK 2.4 and earlier).
6. Build your solution to verify that you have no errors.
### Step 2: Instrument your code
-Replace the contents of WorkerRole.cs with the following code. The class SampleEventSourceWriter, inherited from the [EventSource Class][EventSource Class], implements four logging methods: **SendEnums**, **MessageMethod**, **SetOther** and **HighFreq**. The first parameter to the **WriteEvent** method defines the ID for the respective event. The Run method implements an infinite loop that calls each of the logging methods implemented in the **SampleEventSourceWriter** class every 10 seconds.
+Replace the contents of WorkerRole.cs with the following code. The class SampleEventSourceWriter, inherited from the [EventSource Class][EventSource Class], implements four logging methods: **SendEnums**, **MessageMethod**, **SetOther**, and **HighFreq**. The first parameter to the **WriteEvent** method defines the ID for the respective event. The Run method implements an infinite loop that calls each of the logging methods implemented in the **SampleEventSourceWriter** class every 10 seconds.
```csharp
using Microsoft.WindowsAzure.ServiceRuntime;
namespace WorkerRole1
```
3. In the **Microsoft Azure Publish Settings** dialog, select **Create New…**.
4. In the **Create Cloud Service and Storage Account** dialog, enter a **Name** (for example, "WadExample") and select a region or affinity group.
5. Set the **Environment** to **Staging**.
-6. Modify any other **Settings** as appropriate and click **Publish**.
-7. After deployment has completed, verify in the Azure portal that your cloud service is in a **Running** state.
+6. Modify any other **Settings** as appropriate and select **Publish**.
+7. After the deployment completes, verify in the Azure portal that your cloud service is in a **Running** state.
### Step 4: Create your Diagnostics configuration file and install the extension
1. Download the public configuration file schema definition by executing the following PowerShell command:
namespace WorkerRole1
```powershell
(Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PublicConfigurationSchema | Out-File -Encoding utf8 -FilePath 'WadConfig.xsd'
```
-2. Add an XML file to your **WorkerRole1** project by right-clicking on the **WorkerRole1** project and select **Add** -> **New Item…** -> **Visual C# items** -> **Data** -> **XML File**. Name the file "WadExample.xml".
+2. Add an XML file to your **WorkerRole1** project by right-clicking the **WorkerRole1** project and selecting **Add** -> **New Item…** -> **Visual C# items** -> **Data** -> **XML File**. Name the file `WadExample.xml`.
![CloudServices_diag_add_xml](./media/cloud-services-dotnet-diagnostics/AddXmlFile.png)
-3. Associate the WadConfig.xsd with the configuration file. Make sure the WadExample.xml editor window is the active window. Press **F4** to open the **Properties** window. Click the **Schemas** property in the **Properties** window. Click the **…** in the **Schemas** property. Click the **Add…** button and navigate to the location where you saved the XSD file and select the file WadConfig.xsd. Click **OK**.
+3. Associate the WadConfig.xsd with the configuration file. Make sure the WadExample.xml editor window is the active window. Press **F4** to open the **Properties** window. Select the **Schemas** property in the **Properties** window. Select the **…** in the **Schemas** property. Select the **Add…** button and navigate to the location where you saved the .xsd file and select the file WadConfig.xsd. Select **OK**.
4. Replace the contents of the WadExample.xml configuration file with the following XML and save the file. This configuration file defines a couple of performance counters to collect: one for CPU utilization and one for memory utilization. Then the configuration defines the four events corresponding to the methods in the SampleEventSourceWriter class.
Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext -Diagnostic
```
### Step 6: Look at your telemetry data
-In the Visual Studio **Server Explorer**, navigate to the wadexample storage account. After the cloud service has been running about five (5) minutes, you should see the tables **WADEnumsTable**, **WADHighFreqTable**, **WADMessageTable**, **WADPerformanceCountersTable** and **WADSetOtherTable**. Double-click one of the tables to view the telemetry that has been collected.
+In the Visual Studio **Server Explorer**, navigate to the wadexample storage account. After the cloud service has been running for about five minutes, you should see the tables **WADEnumsTable**, **WADHighFreqTable**, **WADMessageTable**, **WADPerformanceCountersTable**, and **WADSetOtherTable**. Double-click one of the tables to view the collected telemetry.
![CloudServices_diag_tables](./media/cloud-services-dotnet-diagnostics/WadExampleTables.png)
The Diagnostics configuration file defines values that are used to initialize di
If you have trouble, see [Troubleshooting Azure Diagnostics](../azure-monitor/agents/diagnostics-extension-troubleshooting.md) for help with common problems.
## Next Steps
-[See a list of related Azure virtual-machine diagnostic articles](../azure-monitor/agents/diagnostics-extension-overview.md) to change the data you are collecting, troubleshoot problems or learn more about diagnostics in general.
+[See a list of related Azure virtual-machine diagnostic articles](../azure-monitor/agents/diagnostics-extension-overview.md) to change the data you collect, troubleshoot problems, or learn more about diagnostics in general.
[EventSource Class]: /dotnet/api/system.diagnostics.tracing.eventsource
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-get-started.md
Title: Get started with Azure Cloud Services (classic) and ASP.NET | Microsoft Docs
-description: Learn how to create a multi-tier app using ASP.NET MVC and Azure. The app runs in a cloud service, with web role and worker role. It uses Entity Framework, SQL Database, and Azure Storage queues and blobs.
+description: Learn how to create a multi-tier app using ASP.NET Model-View-Controller (MVC) and Azure. The app runs in a cloud service, with web role and worker role. It uses Entity Framework, SQL Database, and Azure Storage queues and blobs.
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This tutorial shows how to create a multi-tier .NET application with an ASP.NET MVC front-end, and deploy it to an [Azure cloud service](cloud-services-choose-me.md). The application uses [Azure SQL Database](/previous-versions/azure/ee336279(v=azure.100)), the [Azure Blob service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/unstructured-blob-storage), and the [Azure Queue service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern). You can [download the Visual Studio project](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4) from the MSDN Code Gallery.
+This tutorial shows you how to create a multi-tier .NET application with an ASP.NET Model-View-Controller (MVC) front-end and deploy it to an [Azure cloud service](cloud-services-choose-me.md). The application uses [Azure SQL Database](/previous-versions/azure/ee336279(v=azure.100)), the [Azure Blob service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/unstructured-blob-storage), and the [Azure Queue service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern). You can [download the Visual Studio project](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4) from the Microsoft Developer Network (MSDN) Code Gallery.
The tutorial shows you how to build and run the application locally, how to deploy it to Azure and run it in the cloud, and how to build it from scratch. You can start by building from scratch and then do the test and deploy steps afterward if you prefer.
The application uses the [queue-centric work pattern](https://www.asp.net/aspnet
## Alternative architecture: App Service and WebJobs
This tutorial shows how to run both front-end and back-end in an Azure cloud service. An alternative is to run the front-end in [Azure App Service](../app-service/index.yml) and use the [WebJobs](../app-service/webjobs-create.md) feature for the back-end. For a tutorial that uses WebJobs, see [Get Started with the Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). For information about how to choose the services that best fit your scenario, see [Azure App Service, Cloud Services, and virtual machines comparison](/azure/architecture/guide/technology-choices/compute-decision-tree).
-## What you'll learn
+## Learning goals
* How to enable your machine for Azure development by installing the Azure SDK.
* How to create a Visual Studio cloud service project with an ASP.NET MVC web role and a worker role.
* How to test the cloud service project locally, using the Azure Storage Emulator.
This tutorial shows how to run both front-end and back-end in an Azure cloud ser
* How to use the Azure Queue service for communication between tiers.
## Prerequisites
-The tutorial assumes that you understand [basic concepts about Azure cloud services](cloud-services-choose-me.md) such as *web role* and *worker role* terminology. It also assumes that you know how to work with [ASP.NET MVC](https://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started) or [Web Forms](https://www.asp.net/web-forms/tutorials/aspnet-45/getting-started-with-aspnet-45-web-forms/introduction-and-overview) projects in Visual Studio. The sample application uses MVC, but most of the tutorial also applies to Web Forms.
+The tutorial assumes that you understand [basic concepts about Azure cloud services](cloud-services-choose-me.md) such as *web role* and *worker role* terminology. It also assumes that you know how to work with [ASP.NET MVC](https://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started) or [Web Forms](https://www.asp.net/web-forms/tutorials/aspnet-45/getting-started-with-aspnet-45-web-forms/introduction-and-overview) projects in Visual Studio. The sample application uses MVC, but most of the tutorial also applies to Web Forms.
-You can run the app locally without an Azure subscription, but you'll need one to deploy the application to the cloud. If you don't have an account, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A55E3C668) or [sign up for a free trial](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A55E3C668).
+You can run the app locally without an Azure subscription, but you need one to deploy the application to the cloud. If you don't have an account, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A55E3C668) or [sign up for a free trial](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A55E3C668).
The tutorial instructions work with any of the following products:
The tutorial instructions work with any of the following products:
If you don't have one of these, Visual Studio may be installed automatically when you install the Azure SDK.
## Application architecture
-The app stores ads in a SQL database, using Entity Framework Code First to create the tables and access the data. For each ad, the database stores two URLs, one for the full-size image and one for the thumbnail.
+The app stores ads in an SQL database, using Entity Framework Code First to create the tables and access the data. For each ad, the database stores two URLs, one for the full-size image and one for the thumbnail.
![This is an image of an Ad table](./media/cloud-services-dotnet-get-started/adtable.png)
When a user uploads an image, the front-end running in a web role stores the ima
1. Download and unzip the [completed solution](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4).
2. Start Visual Studio.
3. From the **File** menu choose **Open Project**, navigate to where you downloaded the solution, and then open the solution file.
-4. Press CTRL+SHIFT+B to build the solution.
+4. To build the solution, press CTRL+SHIFT+B.
- By default, Visual Studio automatically restores the NuGet package content, which was not included in the *.zip* file. If the packages don't restore, install them manually by going to the **Manage NuGet Packages for Solution** dialog box and clicking the **Restore** button at the top right.
+ By default, Visual Studio automatically restores the NuGet package content, which wasn't included in the *.zip* file. If the packages don't restore, install them manually by going to the **Manage NuGet Packages for Solution** dialog box and selecting the **Restore** button at the top right.
5. In **Solution Explorer**, make sure that **ContosoAdsCloudService** is selected as the startup project.
6. If you're using Visual Studio 2015 or higher, change the SQL Server connection string in the application *Web.config* file of the ContosoAdsWeb project and in the *ServiceConfiguration.Local.cscfg* file of the ContosoAdsCloudService project. In each case, change "(localdb)\v11.0" to "(localdb)\MSSQLLocalDB".
-7. Press CTRL+F5 to run the application.
+7. To run the application, press CTRL+F5.
When you run a cloud service project locally, Visual Studio automatically invokes the Azure *compute emulator* and Azure *storage emulator*. The compute emulator uses your computer's resources to simulate the web role and worker role environments. The storage emulator uses a [SQL Server Express LocalDB](/sql/database-engine/configure-windows/sql-server-2016-express-localdb) database to simulate Azure cloud storage. The first time you run a cloud service project, it takes a minute or so for the emulators to start up. When emulator startup is finished, the default browser opens to the application home page.

![Contoso Ads architecture 1](./media/cloud-services-dotnet-get-started/home.png)
-8. Click **Create an Ad**.
-9. Enter some test data and select a *.jpg* image to upload, and then click **Create**.
+8. Select **Create an Ad**.
+9. Enter some test data and select a *.jpg* image to upload, and then select **Create**.
![Image shows Create page](./media/cloud-services-dotnet-get-started/create.png)
- The app goes to the Index page, but it doesn't show a thumbnail for the new ad because that processing hasn't happened yet.
+ The app goes to the Index page, but it doesn't show a thumbnail for the new ad because that processing has yet to happen.
10. Wait a moment and then refresh the Index page to see the thumbnail.

    ![Index page](./media/cloud-services-dotnet-get-started/list.png)
-11. Click **Details** for your ad to see the full-size image.
+11. Select **Details** for your ad to see the full-size image.
![Details page](./media/cloud-services-dotnet-get-started/details.png)

You've been running the application entirely on your local computer, with no connection to the cloud. The storage emulator stores the queue and blob data in a SQL Server Express LocalDB database, and the application stores the ad data in another LocalDB database. Entity Framework Code First automatically created the ad database the first time the web app tried to access it.
-In the following section you'll configure the solution to use Azure cloud resources for queues, blobs, and the application database when it runs in the cloud. If you wanted to continue to run locally but use cloud storage and database resources, you could do that. It's just a matter of setting connection strings, which you'll see how to do.
+In the following section, you configure the solution to use Azure cloud resources for queues, blobs, and the application database when it runs in the cloud. If you wanted to continue to run locally but use cloud storage and database resources, you could do that. It's just a matter of setting connection strings, which you learn how to do in a later section.
## Deploy the application to Azure
-You'll do the following steps to run the application in the cloud:
+You do the following steps to run the application in the cloud:
* Create an Azure cloud service.
* Create a database in Azure SQL Database.
* Deploy the project to your Azure cloud service.

### Create an Azure cloud service
-An Azure cloud service is the environment the application will run in.
+An Azure cloud service is the environment the application runs in.
1. In your browser, open the [Azure portal](https://portal.azure.com).
-2. Click **Create a resource > Compute > Cloud Service**.
+2. Select **Create a resource > Compute > Cloud Service**.
-3. In the DNS name input box, enter a URL prefix for the cloud service.
+3. In the Domain Name System (DNS) name input box, enter a URL prefix for the cloud service.
- This URL has to be unique. You'll get an error message if the prefix you choose is already in use.
-4. Specify a new Resource group for the service. Click **Create new** and then type a name in the Resource group input box, such as CS_contososadsRG.
+ This URL has to be unique. You get an error message if the prefix you choose is already in use.
+4. Specify a new Resource group for the service. Select **Create new** and then type a name in the Resource group input box, such as CS_contososadsRG.
5. Choose the region where you want to deploy the application.
- This field specifies which datacenter your cloud service will be hosted in. For a production application, you'd choose the region closest to your customers. For this tutorial, choose the region closest to you.
-5. Click **Create**.
+ This field specifies which datacenter your cloud service is hosted in. For a production application, you'd choose the region closest to your customers. For this tutorial, choose the region closest to you.
+5. Select **Create**.
In the following image, a cloud service is created with the URL CSvccontosoads.cloudapp.net.

![Image shows New Cloud Service](./media/cloud-services-dotnet-get-started/newcs.png)

### Create a database in Azure SQL Database
-When the app runs in the cloud, it will use a cloud-based database.
+When the app runs in the cloud, it uses a cloud-based database.
-1. In the [Azure portal](https://portal.azure.com), click **Create a resource > Databases > SQL Database**.
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource > Databases > SQL Database**.
2. In the **Database Name** box, enter *contosoads*.
-3. In the **Resource group**, click **Use existing** and select the resource group used for the cloud service.
-4. In the following image, click **Server - Configure required settings** and **Create a new server**.
+3. In the **Resource group**, choose **Use existing** and select the resource group used for the cloud service.
+4. In the following image, select **Server - Configure required settings** and **Create a new server**.
![Tunnel to database server](./media/cloud-services-dotnet-get-started/newdb.png)
6. Enter an administrator **Login Name** and **Password**.
- If you selected **Create a new server**, you aren't entering an existing name and password here. You're entering a new name and password that you're defining now to use later when you access the database. If you selected a server that you created previously, you'll be prompted for the password to the administrative user account you already created.
+ If you selected **Create a new server**, you aren't entering an existing name and password here. You're entering a new name and password that you're defining now to use later when you access the database. If you selected a server that you created previously, the portal prompts you for the password to the administrative user account you already created.
7. Choose the same **Location** that you chose for the cloud service.
- When the cloud service and database are in different datacenters (different regions), latency will increase and you will be charged for bandwidth outside the data center. Bandwidth within a data center is free.
+ When the cloud service and database are in different datacenters (different regions), latency increases and you incur charges for bandwidth outside the data center. Bandwidth within a data center is free.
8. Check **Allow Azure services to access server**.
-9. Click **Select** for the new server.
+9. Select **Select** for the new server.
![New server](./media/cloud-services-dotnet-get-started/newdbserver.png)
-10. Click **Create**.
+10. Choose **Create**.
### Create an Azure storage account

An Azure storage account provides resources for storing queue and blob data in the cloud.
-In a real-world application, you would typically create separate accounts for application data versus logging data, and separate accounts for test data versus production data. For this tutorial, you'll use just one account.
+In a real-world application, you would typically create separate accounts for application data versus logging data, and separate accounts for test data versus production data. For this tutorial, you use just one account.
-1. In the [Azure portal](https://portal.azure.com), click **Create a resource > Storage > Storage account - blob, file, table, queue**.
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource > Storage > Storage account - blob, file, table, queue**.
2. In the **Name** box, enter a URL prefix.
- This prefix plus the text you see under the box will be the unique URL to your storage account. If the prefix you enter has already been used by someone else, you'll have to choose a different prefix.
+ This prefix plus the text you see under the box is the unique URL to your storage account. If the prefix you enter is already in use by someone else, choose a different prefix.
3. Set the **Deployment model** to *Classic*.
4. Set the **Replication** drop-down list to **Locally redundant storage**.

   When geo-replication is enabled for a storage account, the stored content is replicated to a secondary datacenter to enable failover if a major disaster occurs in the primary location. Geo-replication can incur additional costs. For test and development accounts, you generally don't want to pay for geo-replication. For more information, see [Create, manage, or delete a storage account](../storage/common/storage-account-create.md).
-5. In the **Resource group**, click **Use existing** and select the resource group used for the cloud service.
+5. In the **Resource group**, select **Use existing** and select the resource group used for the cloud service.
6. Set the **Location** drop-down list to the same region you chose for the cloud service.
- When the cloud service and storage account are in different datacenters (different regions), latency will increase and you will be charged for bandwidth outside the data center. Bandwidth within a data center is free.
+ When the cloud service and storage account are in different datacenters (different regions), latency increases and you incur charges for bandwidth outside the data center. Bandwidth within a data center is free.
- Azure affinity groups provide a mechanism to minimize the distance between resources in a data center, which can reduce latency. This tutorial does not use affinity groups. For more information, see [How to Create an Affinity Group in Azure](/previous-versions/azure/reference/gg715317(v=azure.100)).
-7. Click **Create**.
+ Azure affinity groups provide a mechanism to minimize the distance between resources in a data center, which can reduce latency. This tutorial doesn't use affinity groups. For more information, see [How to Create an Affinity Group in Azure](/previous-versions/azure/reference/gg715317(v=azure.100)).
+7. Choose **Create**.
![New storage account](./media/cloud-services-dotnet-get-started/newstorage.png)
The web project and the worker role project each have their own database connection string, and each needs to point to the database in Azure SQL Database when the app runs in Azure.
-You'll use a [Web.config transform](https://www.asp.net/mvc/tutorials/deployment/visual-studio-web-deployment/web-config-transformations) for the web role and a cloud service environment setting for the worker role.
+You use a [Web.config transform](https://www.asp.net/mvc/tutorials/deployment/visual-studio-web-deployment/web-config-transformations) for the web role and a cloud service environment setting for the worker role.
> [!NOTE]
> In this section and the next section, you store credentials in project files. [Don't store sensitive data in public source code repositories](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/source-control#secrets).
```

Leave the file open for editing.
-2. In the [Azure portal](https://portal.azure.com), click **SQL Databases** in the left pane, click the database you created for this tutorial, and then click **Show connection strings**.
+2. In the [Azure portal](https://portal.azure.com), choose **SQL Databases** in the left pane, select the database you created for this tutorial, and then select **Show connection strings**.
![Show connection strings](./media/cloud-services-dotnet-get-started/showcs.png)
4. In the connection string that you pasted into the *Web.Release.config* transform file, replace `{your_password_here}` with the password you created for the new SQL database.
5. Save the file.
6. Select and copy the connection string (without the surrounding quotation marks) for use in the following steps for configuring the worker role project.
-7. In **Solution Explorer**, under **Roles** in the cloud service project, right-click **ContosoAdsWorker** and then click **Properties**.
+7. In **Solution Explorer**, under **Roles** in the cloud service project, right-click **ContosoAdsWorker** and then select **Properties**.
![Screenshot that highlights the Properties menu option.](./media/cloud-services-dotnet-get-started/rolepropertiesworker.png)
-8. Click the **Settings** tab.
+8. Choose the **Settings** tab.
9. Change **Service Configuration** to **Cloud**.
10. Select the **Value** field for the `ContosoAdsDbConnectionString` setting, and then paste the connection string that you copied from the previous section of the tutorial.
11. Save your changes.

### Configure the solution to use your Azure storage account when it runs in Azure
-Azure storage account connection strings for both the web role project and the worker role project are stored in environment settings in the cloud service project. For each project, there is a separate set of settings to be used when the application runs locally and when it runs in the cloud. You'll update the cloud environment settings for both web and worker role projects.
+Azure storage account connection strings for both the web role project and the worker role project are stored in environment settings in the cloud service project. For each project, there's a separate set of settings to be used when the application runs locally and when it runs in the cloud. You update the cloud environment settings for both web and worker role projects.
-1. In **Solution Explorer**, right-click **ContosoAdsWeb** under **Roles** in the **ContosoAdsCloudService** project, and then click **Properties**.
+1. In **Solution Explorer**, right-click **ContosoAdsWeb** under **Roles** in the **ContosoAdsCloudService** project, and then select **Properties**.
![Image shows Role properties](./media/cloud-services-dotnet-get-started/roleproperties.png)
-2. Click the **Settings** tab. In the **Service Configuration** drop-down box, choose **Cloud**.
+2. Choose the **Settings** tab. In the **Service Configuration** drop-down box, choose **Cloud**.
![Cloud configuration](./media/cloud-services-dotnet-get-started/sccloud.png)
-3. Select the **StorageConnectionString** entry, and you'll see an ellipsis (**...**) button at the right end of the line. Click the ellipsis button to open the **Create Storage Account Connection String** dialog box.
+3. Select the **StorageConnectionString** entry, and you see an ellipsis (**...**) button at the right end of the line. Choose the ellipsis button to open the **Create Storage Account Connection String** dialog box.
![Open Connection String Create box](./media/cloud-services-dotnet-get-started/opencscreate.png)
-4. In the **Create Storage Connection String** dialog box, click **Your subscription**, choose the storage account that you created earlier, and then click **OK**. If you're not already logged in, you'll be prompted for your Azure account credentials.
+4. In the **Create Storage Connection String** dialog box, select **Your subscription**, choose the storage account that you created earlier, and then select **OK**. If you're not already signed in, Visual Studio prompts you for your Azure account credentials.
![Create Storage Connection String](./media/cloud-services-dotnet-get-started/createstoragecs.png)

5. Save your changes.
   This connection string is used for logging.
7. Follow the same procedure that you used for the **ContosoAdsWeb** role to set both connection strings for the **ContosoAdsWorker** role. Don't forget to set **Service Configuration** to **Cloud**.
-The role environment settings that you have configured using the Visual Studio UI are stored in the following files in the ContosoAdsCloudService project:
+The role environment settings that you configured using the Visual Studio UI are stored in the following files in the ContosoAdsCloudService project:
* *ServiceDefinition.csdef* - Defines the setting names.
* *ServiceConfiguration.Cloud.cscfg* - Provides values for when the app runs in the cloud.
And the *ServiceConfiguration.Cloud.cscfg* file includes the values you entered in the Visual Studio UI:
</Role>
```
-The `<Instances>` setting specifies the number of virtual machines that Azure will run the worker role code on. The [Next steps](#next-steps) section includes links to more information about scaling out a cloud service,
+The `<Instances>` setting specifies the number of virtual machines that Azure runs the worker role code on. The [Next steps](#next-steps) section includes links to more information about scaling out a cloud service.
### Deploy the project to Azure

1. In **Solution Explorer**, right-click the **ContosoAdsCloudService** cloud project and then select **Publish**.

   ![Publish menu](./media/cloud-services-dotnet-get-started/pubmenu.png)
-2. In the **Sign in** step of the **Publish Azure Application** wizard, click **Next**.
+2. In the **Sign in** step of the **Publish Azure Application** wizard, select **Next**.
![Sign in step](./media/cloud-services-dotnet-get-started/pubsignin.png)
-3. In the **Settings** step of the wizard, click **Next**.
+3. In the **Settings** step of the wizard, select **Next**.
![Settings step](./media/cloud-services-dotnet-get-started/pubsettings.png)

   The default settings in the **Advanced** tab are fine for this tutorial. For information about the advanced tab, see [Publish Azure Application Wizard](/visualstudio/azure/vs-azure-tools-publish-azure-application-wizard).
-4. In the **Summary** step, click **Publish**.
+4. In the **Summary** step, select **Publish**.
![Summary step](./media/cloud-services-dotnet-get-started/pubsummary.png)

   The **Azure Activity Log** window opens in Visual Studio.
-5. Click the right arrow icon to expand the deployment details.
+5. Choose the right arrow icon to expand the deployment details.
   The deployment can take 5 minutes or more to complete.

   ![Azure Activity Log window](./media/cloud-services-dotnet-get-started/waal.png)
-6. When the deployment status is complete, click the **Web app URL** to start the application.
+6. When the deployment status is complete, select the **Web app URL** to start the application.
7. You can now test the app by creating, viewing, and editing some ads, as you did when you ran the application locally.

> [!NOTE]
>

## Create the application from scratch
-If you haven't already downloaded
-[the completed application](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4), do that now. You'll copy files from the downloaded project into the new project.
+If you still need to download [the completed application](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4), do that now. Copy the files from the downloaded project into the new project.
Creating the Contoso Ads application involves the following steps:
After the solution is created, you'll review the code that is unique to cloud service projects.
### Create a cloud service Visual Studio solution

1. In Visual Studio, choose **New Project** from the **File** menu.
2. In the left pane of the **New Project** dialog box, expand **Visual C#** and choose **Cloud** templates, and then choose the **Azure Cloud Service** template.
-3. Name the project and solution ContosoAdsCloudService, and then click **OK**.
+3. Name the project and solution ContosoAdsCloudService, and then select **OK**.
![New Project](./media/cloud-services-dotnet-get-started/newproject.png)

4. In the **New Azure Cloud Service** dialog box, add a web role and a worker role. Name the web role ContosoAdsWeb, and name the worker role ContosoAdsWorker. (Use the pencil icon in the right-hand pane to change the default names of the roles.)

   ![New Cloud Service Project](./media/cloud-services-dotnet-get-started/newcsproj.png)
-5. When you see the **New ASP.NET Project** dialog box for the web role, choose the MVC template, and then click **Change Authentication**.
+5. When you see the **New ASP.NET Project** dialog box for the web role, choose the MVC template, and then select **Change Authentication**.
![Change Authentication](./media/cloud-services-dotnet-get-started/chgauth.png)
-6. In the **Change Authentication** dialog box, choose **No Authentication**, and then click **OK**.
+6. In the **Change Authentication** dialog box, choose **No Authentication**, and then select **OK**.
![No Authentication](./media/cloud-services-dotnet-get-started/noauth.png)
-7. In the **New ASP.NET Project** dialog, click **OK**.
+7. In the **New ASP.NET Project** dialog, select **OK**.
8. In **Solution Explorer**, right-click the solution (not one of the projects), and choose **Add - New Project**.
-9. In the **Add New Project** dialog box, choose **Windows** under **Visual C#** in the left pane, and then click the **Class Library** template.
-10. Name the project *ContosoAdsCommon*, and then click **OK**.
+9. In the **Add New Project** dialog box, choose **Windows** under **Visual C#** in the left pane, and then select the **Class Library** template.
+10. Name the project *ContosoAdsCommon*, and then select **OK**.
You need to reference the Entity Framework context and the data model from both web and worker role projects. As an alternative, you could define the EF-related classes in the web role project and reference that project from the worker role project. But in the alternative approach, your worker role project would have a reference to web assemblies that it doesn't need.

### Update and add NuGet packages

1. Open the **Manage NuGet Packages** dialog box for the solution.
2. At the top of the window, select **Updates**.
-3. Look for the *WindowsAzure.Storage* package, and if it's in the list, select it and select the web and worker projects to update it in, and then click **Update**.
+3. Look for the *WindowsAzure.Storage* package, and if it's in the list, select it and select the web and worker projects to update it in, and then select **Update**.
- The storage client library is updated more frequently than Visual Studio project templates, so you'll often find that the version in a newly-created project needs to be updated.
+ The storage client library is updated more frequently than Visual Studio project templates, so you may find that the version in a newly created project needs to be updated.
4. At the top of the window, select **Browse**.
5. Find the *EntityFramework* NuGet package, and install it in all three projects.
6. Find the *Microsoft.WindowsAzure.ConfigurationManager* NuGet package, and install it in the worker role project.

### Set project references
-1. In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project. Right-click the ContosoAdsWeb project, and then click **References** - **Add References**. In the **Reference Manager** dialog box, select **Solution – Projects** in the left pane, select **ContosoAdsCommon**, and then click **OK**.
+1. In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project. Right-click the ContosoAdsWeb project, and then select **References** - **Add References**. In the **Reference Manager** dialog box, select **Solution – Projects** in the left pane, select **ContosoAdsCommon**, and then select **OK**.
2. In the ContosoAdsWorker project, set a reference to the ContosoAdsCommon project.
- ContosoAdsCommon will contain the Entity Framework data model and context class, which will be used by both the front-end and back-end.
+ ContosoAdsCommon contains the Entity Framework data model and context class, which both the front-end and back-end use.
3. In the ContosoAdsWorker project, set a reference to `System.Drawing`. This assembly is used by the back-end to convert images to thumbnails.
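The conversion code itself lives in *WorkerRole.cs*, which you add later. As a hedged sketch of the kind of `System.Drawing` call involved (the 80-pixel width, method shape, and class name are assumptions, not the tutorial's exact code):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

public static class ThumbnailHelper
{
    public static void ConvertImageToThumbnail(Stream input, Stream output)
    {
        using (var image = Image.FromStream(input))
        {
            const int width = 80;                                         // assumed thumbnail width
            int height = Math.Max(1, image.Height * width / image.Width); // preserve the aspect ratio
            using (var thumbnail = new Bitmap(image, width, height))
            {
                thumbnail.Save(output, ImageFormat.Jpeg);                 // the app uploads .jpg images
            }
        }
    }
}
```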
In this section, you configure Azure Storage and SQL connection strings for testing.
   If you're using Visual Studio 2015 or higher, replace "v11.0" with "MSSQLLocalDB".
2. Save your changes.
-3. In the ContosoAdsCloudService project, right-click ContosoAdsWeb under **Roles**, and then click **Properties**.
+3. In the ContosoAdsCloudService project, right-click ContosoAdsWeb under **Roles**, and then select **Properties**.
![Role properties image](./media/cloud-services-dotnet-get-started/roleproperties.png)
-4. In the **ContosoAdsWeb [Role]** properties window, click the **Settings** tab, and then click **Add Setting**.
+4. In the **ContosoAdsWeb [Role]** properties window, select the **Settings** tab, and then select **Add Setting**.
   Leave **Service Configuration** set to **All Configurations**.
5. Add a setting named *StorageConnectionString*. Set **Type** to *ConnectionString*, and set **Value** to *UseDevelopmentStorage=true*.
![New connection string](./media/cloud-services-dotnet-get-started/scall.png)

6. Save your changes.
7. Follow the same procedure to add a storage connection string in the ContosoAdsWorker role properties.
-8. Still in the **ContosoAdsWorker [Role]** properties window, add another connection string:
+8. While still in the **ContosoAdsWorker [Role]** properties window, add another connection string:
   * Name: ContosoAdsDbConnectionString
   * Type: String
```

### Add code files
-In this section, you copy code files from the downloaded solution into the new solution. The following sections will show and explain key parts of this code.
+In this section, you copy code files from the downloaded solution into the new solution. The following sections show and explain key parts of this code.
-To add files to a project or a folder, right-click the project or folder and click **Add** - **Existing Item**. Select the files you want and then click **Add**. If asked whether you want to replace existing files, click **Yes**.
+To add files to a project or a folder, right-click the project or folder and select **Add** - **Existing Item**. Select the files you want and then select **Add**. If asked whether you want to replace existing files, select **Yes**.
1. In the ContosoAdsCommon project, delete the *Class1.cs* file and add in its place the *Ad.cs* and *ContosoAdscontext.cs* files from the downloaded project.
2. In the ContosoAdsWeb project, add the following files from the downloaded project.
   * In the *Views\Ad* folder (create the folder first): five *.cshtml* files.
3. In the ContosoAdsWorker project, add *WorkerRole.cs* from the downloaded project.
-You can now build and run the application as instructed earlier in the tutorial, and the app will use local database and storage emulator resources.
+You can now build and run the application as instructed earlier in the tutorial, and the app uses local database and storage emulator resources.
-The following sections explain the code related to working with the Azure environment, blobs, and queues. This tutorial does not explain how to create MVC controllers and views using scaffolding, how to write Entity Framework code that works with SQL Server databases, or the basics of asynchronous programming in ASP.NET 4.5. For information about these topics, see the following resources:
+The following sections explain the code related to working with the Azure environment, blobs, and queues. This tutorial doesn't explain how to create MVC controllers and views using scaffolding, how to write Entity Framework code that works with SQL Server databases, or the basics of asynchronous programming in ASP.NET 4.5. For information about these topics, see the following resources:
* [Get started with MVC 5](https://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started)
* [Get started with EF 6 and MVC 5](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc)
public class Ad
```

### ContosoAdsCommon - ContosoAdsContext.cs
-The ContosoAdsContext class specifies that the Ad class is used in a DbSet collection, which Entity Framework will store in a SQL database.
+The ContosoAdsContext class specifies that the Ad class is used in a DbSet collection, which Entity Framework stores in an SQL database.
```csharp
public class ContosoAdsContext : DbContext
{
    // Used by the web project; refers to a connection string name in Web.config.
    public ContosoAdsContext() : base("name=ContosoAdsContext") { }

    // Used by the worker role, which passes in the actual connection string.
    public ContosoAdsContext(string connString) : base(connString) { }

    public DbSet<Ad> Ads { get; set; }
}
```
-The class has two constructors. The first of them is used by the web project, and specifies the name of a connection string that is stored in the Web.config file. The second constructor enables you to pass in the actual connection string used by the worker role project, since it doesn't have a Web.config file. You saw earlier where this connection string was stored, and you'll see later how the code retrieves the connection string when it instantiates the DbContext class.
+The class has two constructors. The first of them is used by the web project, and specifies the name of a connection string that is stored in the Web.config file. The second constructor enables you to pass in the actual connection string used by the worker role project, since it doesn't have a Web.config file. You saw earlier where this connection string was stored. Later, you see how the code retrieves the connection string when it instantiates the DbContext class.
### ContosoAdsWeb - Global.asax.cs
-Code that is called from the `Application_Start` method creates an *images* blob container and an *images* queue if they don't already exist. This ensures that whenever you start using a new storage account, or start using the storage emulator on a new computer, the required blob container and queue will be created automatically.
+Code that is called from the `Application_Start` method creates an *images* blob container and an *images* queue if they don't already exist. This code ensures that whenever you use a new storage account or use the storage emulator on a new computer, the code automatically creates the required blob container and queue.
The code gets access to the storage account by using the storage connection string from the *.cscfg* file.
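As a rough sketch of the kind of startup code this describes (assuming the usual `using` directives for the *WindowsAzure.Storage* and *ConfigurationManager* packages; this is illustrative, not the exact contents of *Global.asax.cs*):

```csharp
// Read the storage connection string from the active service configuration.
var storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the "images" blob container if it doesn't exist yet.
var blobClient = storageAccount.CreateCloudBlobClient();
blobClient.GetContainerReference("images").CreateIfNotExists();

// Create the "images" queue used to hand thumbnail work to the worker role.
var queueClient = storageAccount.CreateCloudQueueClient();
queueClient.GetQueueReference("images").CreateIfNotExists();
```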
An `<input>` element tells the browser to provide a file selection dialog.
### ContosoAdsWorker - WorkerRole.cs - OnStart method

The Azure worker role environment calls the `OnStart` method in the `WorkerRole` class when the worker role is getting started, and it calls the `Run` method when the `OnStart` method finishes.
-The `OnStart` method gets the database connection string from the *.cscfg* file and passes it to the Entity Framework DbContext class. The SQLClient provider is used by default, so the provider does not have to be specified.
+The `OnStart` method gets the database connection string from the *.cscfg* file and passes it to the Entity Framework DbContext class. The SQLClient provider is used by default, so the provider doesn't have to be specified.
```csharp
var dbConnString = CloudConfigurationManager.GetSetting("ContosoAdsDbConnectionString");
```
```csharp
public override void Run()
{
    // ... the queue-processing loop, sketched in the next section ...
}
```
-After each iteration of the loop, if no queue message was found, the program sleeps for a second. This prevents the worker role from incurring excessive CPU time and storage transaction costs. The Microsoft Customer Advisory Team tells a story about a developer who forgot to include this, deployed to production, and left for vacation. When they got back, their oversight cost more than the vacation.
+After each iteration of the loop, if no queue message was found, the program sleeps for a second. This sleep prevents the worker role from incurring excessive CPU time and storage transaction costs. The Microsoft Customer Advisory Team tells a story about a developer who forgot to include this sleep function, deployed to production, and left for vacation. When they got back, their oversight cost more than the vacation.
-Sometimes the content of a queue message causes an error in processing. This is called a *poison message*, and if you just logged an error and restarted the loop, you could endlessly try to process that message. Therefore the catch block includes an if statement that checks to see how many times the app has tried to process the current message, and if it has been more than 5 times, the message is deleted from the queue.
+Sometimes the content of a queue message causes an error in processing. This kind of message is called a *poison message*. If you merely logged an error and restarted the loop, you could endlessly try to process that message. Therefore, the catch block includes an if statement that checks how many times the app has tried to process the current message. If the count is more than five, the message is deleted from the queue.
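Condensed to its shape, the loop these paragraphs describe looks something like the following sketch (assuming an `imagesQueue` field of type `CloudQueue` plus the `System.Threading` and `System.Diagnostics` namespaces; the real `Run` method in *WorkerRole.cs* does more):

```csharp
while (true)
{
    var msg = imagesQueue.GetMessage();   // returns null when the queue is empty
    if (msg == null)
    {
        Thread.Sleep(1000);               // the one-second sleep that saves CPU and transaction costs
        continue;
    }
    try
    {
        ProcessQueueMessage(msg);
    }
    catch (Exception ex)
    {
        Trace.TraceError("Error processing message: " + ex.Message);
        if (msg.DequeueCount > 5)         // poison message: stop retrying after five attempts
        {
            imagesQueue.DeleteMessage(msg);
        }
    }
}
```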
`ProcessQueueMessage` is called when a queue message is found.
This code reads the database to get the image URL, converts the image to a thumbnail, saves the thumbnail in a blob, updates the database with the thumbnail blob URL, and deletes the queue message.
In case something doesn't work while you're following the instructions in this tutorial, here are some common errors and how to resolve them.

### ServiceRuntime.RoleEnvironmentException
-The `RoleEnvironment` object is provided by Azure when you run an application in Azure or when you run locally using the Azure Compute Emulator. If you get this error when you're running locally, make sure that you have set the ContosoAdsCloudService project as the startup project. This sets up the project to run using the Azure Compute Emulator.
+The `RoleEnvironment` object is provided by Azure when you run an application in Azure or when you run locally using the Azure Compute Emulator. If you get this error when you're running locally, make sure that you set the ContosoAdsCloudService project as the startup project. This setting makes the project run using the Azure Compute Emulator.
-One of the things the application uses the Azure RoleEnvironment for is to get the connection string values that are stored in the *.cscfg* files, so another cause of this exception is a missing connection string. Make sure that you created the StorageConnectionString setting for both Cloud and Local configurations in the ContosoAdsWeb project, and that you created both connection strings for both configurations in the ContosoAdsWorker project. If you do a **Find All** search for StorageConnectionString in the entire solution, you should see it 9 times in 6 files.
+One of the things the application uses the Azure RoleEnvironment for is to get the connection string values that are stored in the *.cscfg* files, so another cause of this exception is a missing connection string. Make sure that you created the StorageConnectionString setting for both Cloud and Local configurations in the ContosoAdsWeb project, and that you created both connection strings for both configurations in the ContosoAdsWorker project. If you do a **Find All** search for StorageConnectionString in the entire solution, you should see it nine times in six files.
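If you want a missing setting to fail fast with a clearer message than a downstream exception, a guard like the following can help. This is purely illustrative and isn't part of the tutorial code:

```csharp
var storageConnString = CloudConfigurationManager.GetSetting("StorageConnectionString");
if (string.IsNullOrEmpty(storageConnString))
{
    // GetSetting returns null when the setting is missing from the active configuration,
    // which is easier to diagnose here than as a failure deeper in the storage code.
    throw new InvalidOperationException(
        "StorageConnectionString is not defined for the current service configuration.");
}
```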
-### Cannot override to port xxx. New port below minimum allowed value 8080 for protocol http
-Try changing the port number used by the web project. Right-click the ContosoAdsWeb project, and then click **Properties**. Click the **Web** tab, and then change the port number in the **Project Url** setting.
+### Can't override to port xxx. New port below minimum allowed value 8080 for protocol http
+Try changing the port number used by the web project. Right-click the ContosoAdsWeb project, and then select **Properties**. Choose the **Web** tab, and then change the port number in the **Project Url** setting.
For an alternative that might resolve the problem, see the following section.

### Other errors when running locally
-By default new cloud service projects use the Azure Compute Emulator express to simulate the Azure environment. This is a lightweight version of the full compute emulator, and under some conditions the full emulator will work when the express version does not.
+By default, new cloud service projects use Azure Compute Emulator express to simulate the Azure environment. The express emulator is a lightweight version of the full compute emulator, and under some conditions the full emulator works when the express version doesn't.
-To change the project to use the full emulator, right-click the ContosoAdsCloudService project, and then click **Properties**. In the **Properties** window click the **Web** tab, and then click the **Use Full Emulator** radio button.
+To change the project to use the full emulator, right-click the ContosoAdsCloudService project, and then select **Properties**. In the **Properties** window, select the **Web** tab, and then select the **Use Full Emulator** radio button.
To run the application with the full emulator, you have to open Visual Studio with administrator privileges.

## Next steps
-The Contoso Ads application has intentionally been kept simple for a getting-started tutorial. For example, it doesn't implement [dependency injection](https://www.asp.net/mvc/tutorials/hands-on-labs/aspnet-mvc-4-dependency-injection) or the [repository and unit of work patterns](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/advanced-entity-framework-scenarios-for-an-mvc-web-application#repo), it doesn't [use an interface for logging](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/monitoring-and-telemetry#log), it doesn't use [EF Code First Migrations](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) to manage data model changes or [EF Connection Resiliency](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/connection-resiliency-and-command-interception-with-the-entity-framework-in-an-asp-net-mvc-application) to manage transient network errors, and so forth.
-
-Here are some cloud service sample applications that demonstrate more real-world coding practices, listed from less complex to more complex:
-
-* [PhluffyFotos](https://code.msdn.microsoft.com/PhluffyFotos-Sample-7ecffd31). Similar in concept to Contoso Ads but implements more features and more real-world coding practices.
-* [Azure Cloud Service Multi-Tier Application with Tables, Queues, and Blobs](https://code.msdn.microsoft.com/windowsazure/Windows-Azure-Multi-Tier-eadceb36). Introduces Azure Storage tables as well as blobs and queues. Based on an older version of the Azure SDK for .NET, will require some modifications to work with the current version.
+The Contoso Ads application is intentionally kept simple for a getting-started tutorial. For example, it doesn't implement [dependency injection](https://www.asp.net/mvc/tutorials/hands-on-labs/aspnet-mvc-4-dependency-injection) or the [repository and unit of work patterns](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/advanced-entity-framework-scenarios-for-an-mvc-web-application#repo). It doesn't [use an interface for logging](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/monitoring-and-telemetry#log), it doesn't use [EF Code First Migrations](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) to manage data model changes or [EF Connection Resiliency](https://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/connection-resiliency-and-command-interception-with-the-entity-framework-in-an-asp-net-mvc-application) to manage transient network errors, and so forth.
For general information about developing for the cloud, see [Building Real-World Cloud Apps with Azure](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/introduction).
-For a video introduction to Azure Storage best practices and patterns, see Microsoft Azure Storage – What's New, Best Practices and Patterns.
For more information, see the following resources:

* [How to manage Cloud Services](cloud-services-how-to-manage-portal.md)
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
Title: Install .NET on Azure Cloud Services (classic) roles
description: This article describes how to manually install the .NET Framework on your cloud service web and worker roles.
Previously updated : 02/21/2023
Last updated : 07/23/2024
To add the installer for a *web* role:
To add the installer for a *worker* role:

* Right-click your *worker* role and select **Add** > **Existing Item**. Select the .NET installer and add it to the role.
-When files are added in this way to the role content folder, they're automatically added to your cloud service package. The files are then deployed to a consistent location on the virtual machine. Repeat this process for each web and worker role in your cloud service so that all roles have a copy of the installer.
+When files are added in this way to the role content folder, they're automatically added to your cloud service package. The files are then deployed to a consistent location on the virtual machine. Repeat this process for each web and worker role in your cloud service so that all roles have a copy of the installer.
> [!NOTE]
> You should install .NET Framework 4.6.2 on your cloud service role even if your application targets .NET Framework 4.6. The Guest OS includes the Knowledge Base [update 3098779](https://support.microsoft.com/kb/3098779) and [update 3097997](https://support.microsoft.com/kb/3097997). Issues can occur when you run your .NET applications if .NET Framework 4.6 is installed on top of the Knowledge Base updates. To avoid these issues, install .NET Framework 4.6.2 rather than version 4.6. For more information, see the [Knowledge Base article 3118750](https://support.microsoft.com/kb/3118750) and [4340191](https://support.microsoft.com/kb/4340191).
You can use startup tasks to perform operations before a role starts. Installing the .NET Framework is one such operation.
2. Create a file named **install.cmd** and add the following install script to the file.
- The script checks whether the specified version of the .NET Framework is already installed on the machine by querying the registry. If the .NET Framework version is not installed, then the .NET Framework web installer is opened. To help troubleshoot any issues, the script logs all activity to the file startuptasklog-(current date and time).txt that is stored in **InstallLogs** local storage.
+ The script checks whether the specified version of the .NET Framework is present on your machine by querying the registry. If the .NET Framework version isn't installed, then the .NET Framework web installer is opened. To help troubleshoot any issues, the script logs all activity to the file startuptasklog-(current date and time).txt that is stored in **InstallLogs** local storage.
> [!IMPORTANT]
> Use a basic text editor like Windows Notepad to create the install.cmd file. If you use Visual Studio to create a text file and change the extension to .cmd, the file might still contain a UTF-8 byte order mark. This mark can cause an error when the first line of the script is run. To avoid this error, make the first line of the script a REM statement that can be skipped by the byte order processing.
EXIT /B 0
```
-3. Add the install.cmd file to each role by using **Add** > **Existing Item** in **Solution Explorer** as described earlier in this topic.
+3. Add the install.cmd file to each role by using **Add** > **Existing Item** in **Solution Explorer** as described earlier in this article.
After this step is complete, all roles should have the .NET installer file and the install.cmd file.
To configure Diagnostics, open the diagnostics.wadcfgx file and add the following configuration:
This XML configures Diagnostics to transfer the files in the log directory in the **NETFXInstall** resource to the Diagnostics storage account in the **netfx-install** blob container.

## Deploy your cloud service
-When you deploy your cloud service, the startup tasks install the .NET Framework if it's not already installed. Your cloud service roles are in the *busy* state while the framework is being installed. If the framework installation requires a restart, the service roles might also restart.
+When you deploy your cloud service, the startup tasks install the .NET Framework (if necessary). Your cloud service roles are in the *busy* state while the framework is being installed. If the framework installation requires a restart, the service roles might also restart.
-## Additional resources
+## Next steps
* [Installing the .NET Framework][Installing the .NET Framework]
* [Determine which .NET Framework versions are installed][How to: Determine Which .NET Framework Versions Are Installed]
* [Troubleshooting .NET Framework installations][Troubleshooting .NET Framework Installations]
cloud-services Cloud Services Enable Communication Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-enable-communication-role-instances.md
Title: Communication for Roles in Cloud Services (classic) | Microsoft Docs
description: Role instances in Cloud Services can have endpoints (http, https, tcp, udp) defined for them that communicate with the outside or between other role instances.
Previously updated : 02/21/2023
Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-Cloud service roles communicate through internal and external connections. External connections are called **input endpoints** while internal connections are called **internal endpoints**. This topic describes how to modify the [service definition](cloud-services-model-and-package.md#csdef) to create endpoints.
+Cloud service roles communicate through internal and external connections. External connections are called **input endpoints** while internal connections are called **internal endpoints**. This article describes how to modify the [service definition](cloud-services-model-and-package.md#csdef) to create endpoints.
## Input endpoint
-The input endpoint is used when you want to expose a port to the outside. You specify the protocol type and the port of the endpoint which then applies for both the external and internal ports for the endpoint. If you want, you can specify a different internal port for the endpoint with the [localPort](/previous-versions/azure/reference/gg557552(v=azure.100)#inputendpoint) attribute.
+The input endpoint is used when you want to expose a port to the outside. You specify the protocol type and the port of the endpoint, which then applies for both the external and internal ports for the endpoint. If you want, you can specify a different internal port for the endpoint with the [localPort](/previous-versions/azure/reference/gg557552(v=azure.100)#inputendpoint) attribute.
The input endpoint can use the following protocols: **http, https, tcp, udp**.
To create an input endpoint, add the **InputEndpoint** child element to the **Endpoints** element in the service definition:
```

## Instance input endpoint
-Instance input endpoints are similar to input endpoints but allows you map specific public-facing ports for each individual role instance by using port forwarding on the load balancer. You can specify a single public-facing port, or a range of ports.
+Instance input endpoints are similar to input endpoints but allow you to map specific public-facing ports for each individual role instance by using port forwarding on the load balancer. You can specify a single public-facing port, or a range of ports.
The instance input endpoint can only use **tcp** or **udp** as the protocol.
To create an instance input endpoint, add the **InstanceInputEndpoint** child element to the **Endpoints** element in the service definition:
```

## Internal endpoint
-Internal endpoints are available for instance-to-instance communication. The port is optional and if omitted, a dynamic port is assigned to the endpoint. A port range can be used. There is a limit of five internal endpoints per role.
+Internal endpoints are available for instance-to-instance communication. The port is optional and if omitted, a dynamic port is assigned to the endpoint. A port range can be used. There's a limit of five internal endpoints per role.
The internal endpoint can use the following protocols: **http, tcp, udp, any**.
You can also use a port range.
## Worker roles vs. Web roles
-There is one minor difference with endpoints when working with both worker and web roles. The web role must have at minimum a single input endpoint using the **HTTP** protocol.
+There's one minor difference with endpoints when working with both worker and web roles. The web role must have at minimum a single input endpoint using the **HTTP** protocol.
```xml
<Endpoints>
```

## Using the .NET SDK to access an endpoint
-The Azure Managed Library provides methods for role instances to communicate at runtime. From code running within a role instance, you can retrieve information about the existence of other role instances and their endpoints, as well as information about the current role instance.
+The Azure Managed Library provides methods for role instances to communicate at runtime. From code running within a role instance, you can retrieve information about the existence of other role instances and their endpoints. You can also obtain information about the current role instance.
> [!NOTE]
> You can only retrieve information about role instances that are running in your cloud service and that define at least one internal endpoint. You cannot obtain data about role instances running in a different service.
You can use the [Instances](/previous-versions/azure/reference/ee741904(v=azure.100)) property to retrieve instances of a role. First use the [CurrentRoleInstance](/previous-versions/azure/reference/ee741907(v=azure.100)) to return a reference to the current role instance, and then use the [Role](/previous-versions/azure/reference/ee741918(v=azure.100)) property to return a reference to the role itself.
-When you connect to a role instance programmatically through the .NET SDK, it's relatively easy to access the endpoint information. For example, after you've already connected to a specific role environment, you can get the port of a specific endpoint with this code:
+When you connect to a role instance programmatically through the .NET SDK, it's relatively easy to access the endpoint information. For example, after you connect to a specific role environment, you can get the port of a specific endpoint with this code:
```csharp
int port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["StandardWeb"].IPEndpoint.Port;
```
-The **Instances** property returns a collection of **RoleInstance** objects. This collection always contains the current instance. If the role does not define an internal endpoint, the collection includes the current instance but no other instances. The number of role instances in the collection will always be 1 in the case where no internal endpoint is defined for the role. If the role defines an internal endpoint, its instances are discoverable at runtime, and the number of instances in the collection will correspond to the number of instances specified for the role in the service configuration file.
+The **Instances** property returns a collection of **RoleInstance** objects. This collection always contains the current instance. If the role doesn't define an internal endpoint, the collection includes the current instance but no other instances. The number of role instances in the collection is always one in the case where no internal endpoint is defined for the role. If the role defines an internal endpoint, its instances are discoverable at runtime, and the number of instances in the collection corresponds to the number of instances specified for the role in the service configuration file.
> [!NOTE]
> The Azure Managed Library does not provide a means of determining the health of other role instances, but you can implement such health assessments yourself if your service needs this functionality. You can use [Azure Diagnostics](cloud-services-dotnet-diagnostics.md) to obtain information about running role instances.
-To determine the port number for an internal endpoint on a role instance, you can use the [`InstanceEndpoints`](/previous-versions/azure/reference/ee741917(v=azure.100)) property to return a Dictionary object that contains endpoint names and their corresponding IP addresses and ports. The [`IPEndpoint`](/previous-versions/azure/reference/ee741919(v=azure.100)) property returns the IP address and port for a specified endpoint. The `PublicIPEndpoint` property returns the port for a load balanced endpoint. The IP address portion of the `PublicIPEndpoint` property is not used.
+To determine the port number for an internal endpoint on a role instance, you can use the [`InstanceEndpoints`](/previous-versions/azure/reference/ee741917(v=azure.100)) property to return a Dictionary object that contains endpoint names and their corresponding IP addresses and ports. The [`IPEndpoint`](/previous-versions/azure/reference/ee741919(v=azure.100)) property returns the IP address and port for a specified endpoint. The `PublicIPEndpoint` property returns the port for a load balanced endpoint. The IP address portion of the `PublicIPEndpoint` property isn't used.
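For example, a role instance can list its own endpoint table with a loop like this sketch (the logging target is an illustrative choice):

```csharp
foreach (var entry in RoleEnvironment.CurrentRoleInstance.InstanceEndpoints)
{
    // Key is the endpoint name from the service definition; IPEndpoint is its IP address and port.
    Trace.WriteLine(entry.Key + " -> " + entry.Value.IPEndpoint);
}
```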
-Here is an example that iterates role instances.
+Here's an example that iterates role instances.
```csharp
foreach (RoleInstance roleInst in RoleEnvironment.CurrentRoleInstance.Role.Instances)
{
    // Example action: log each discoverable instance's ID.
    Trace.WriteLine("Instance ID: " + roleInst.Id);
}
```
-Here is an example of a worker role that gets the endpoint exposed through the service definition and starts listening for connections.
+Here's an example of a worker role that gets the endpoint exposed through the service definition and starts listening for connections.
> [!WARNING]
> This code will only work for a deployed service. When running in the Azure Compute Emulator, service configuration elements that create direct port endpoints (**InstanceInputEndpoint** elements) are ignored.
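The omitted sample reduces to something like this sketch (the endpoint name `MyInternalEndpoint` is an assumption; use whatever name your service definition declares):

```csharp
using System.Net;
using System.Net.Sockets;
using Microsoft.WindowsAzure.ServiceRuntime;

// Look up the IP address and port that Azure assigned to the endpoint,
// then start accepting TCP connections on it.
IPEndPoint endpoint =
    RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["MyInternalEndpoint"].IPEndpoint;
var listener = new TcpListener(endpoint);
listener.Start();
```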
Only allows network traffic from **WebRole1** to **WorkerRole1**, **WebRole1** t
</ServiceDefinition>
```
-An XML schema reference for the elements used above can be found [here](/previous-versions/azure/reference/gg557551(v=azure.100)).
+An XML schema reference for the elements used can be found [here](/previous-versions/azure/reference/gg557551(v=azure.100)).
## Next steps

Read more about the Cloud Service [model](cloud-services-model-and-package.md).
cloud-services Cloud Services Guestos Family 2 3 4 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family-2-3-4-retirement.md
Title: Guest OS family 2, 3, and 4 retirement notice | Microsoft Docs
-description: Information about when the Azure Guest OS Family 2, 3, and 4 retirement happened and how to determine if you're affected.
+description: Information about when the Azure Guest OS Family 2, 3, and 4 retirement happened and how to determine if their retirement affects you.
Previously updated : 07/08/2024
Last updated : 07/23/2024
foreach($subscription in Get-AzureSubscription) {
} ```
-Your cloud services are impacted by this retirement if the `osFamily` column in the script output contains a `2`, `3`, `4`, or is empty. If empty, the default `osFamily` attribute will point to `osFamily` `5`.
+This retirement affects your cloud services if the `osFamily` column in the script output contains a `2`, `3`, or `4`, or is empty. If empty, the default `osFamily` attribute points to `osFamily` `5`.
## Recommendations
-If you're affected, we recommend you migrate your Cloud Service or [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) roles to one of the supported Guest OS Families:
+If this retirement affects you, we recommend you migrate your Cloud Service or [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) roles to one of the supported Guest OS Families:
**Guest OS family 7.x** - Windows Server 2022 *(recommended)*
cloud-services Cloud Services Guestos Family1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family1-retirement.md
Title: Guest OS family 1 retirement notice | Microsoft Docs
-description: Provides information about when the Azure Guest OS Family 1 retirement happened and how to determine if you are affected
+description: Provides information about when the Azure Guest OS Family 1 retirement happened and how to determine if its retirement affects you.
Previously updated : 02/21/2023 Last updated : 07/23/2024
The retirement of OS Family 1 was first announced on June 1, 2013.
-**Sept 2, 2014** The Azure Guest operating system (Guest OS) Family 1.x, which is based on the Windows Server 2008 operating system, was officially retired. All attempts to deploy new services or upgrade existing services using Family 1 will fail with an error message informing you that the Guest OS Family 1 has been retired.
+**Sept 2, 2014** The Azure Guest operating system (Guest OS) Family 1.x, which is based on the Windows Server 2008 operating system, was officially retired. All attempts to deploy new services or upgrade existing services using Family 1 fail with an error message informing you that the Guest OS Family 1 is retired.
-**November 3, 2014** Extended support for Guest OS Family 1 ended and it is fully retired. All services still on Family 1 will be impacted. We may stop those services at any time. There is no guarantee your services will continue to run unless you manually upgrade them yourself.
+**November 3, 2014** Extended support for Guest OS Family 1 ended. Guest OS Family 1 is retired. This retirement affects all services still on Family 1. We may stop those services at any time. There's no guarantee your services continue to run unless you manually upgrade them yourself.
-If you have additional questions, visit the [Microsoft Q&A question page for Cloud Services](/answers/topics/azure-cloud-services.html) or [contact Azure support](https://azure.microsoft.com/support/options/).
+If you have other questions, visit the [Microsoft Question & Answer page for Cloud Services](/answers/topics/azure-cloud-services.html) or [contact Azure support](https://azure.microsoft.com/support/options/).
## Are you affected?
-Your Cloud Services are affected if any one of the following applies:
+This retirement affects your cloud services if any one of the following applies:
1. You have a value of "osFamily = "1" explicitly specified in the ServiceConfiguration.cscfg file for your Cloud Service.
-2. You do not have a value for osFamily explicitly specified in the ServiceConfiguration.cscfg file for your Cloud Service. Currently, the system uses the default value of "1" in this case.
+2. You don't have a value for osFamily explicitly specified in the ServiceConfiguration.cscfg file for your Cloud Service. Currently, the system uses the default value of "1" in this case.
3. The Azure portal lists your Guest Operating System family value as "Windows Server 2008". To find which of your cloud services are running which OS Family, you can run the following script in Azure PowerShell, though you must [set up Azure PowerShell](/powershell/azure/) first. For more information on the script, see [Azure Guest OS Family 1 End of Life: June 2014](/archive/blogs/ryberry/azure-guest-os-family-1-end-of-life-june-2014).
foreach($subscription in Get-AzureSubscription) {
} ```
-Your cloud services will be impacted by OS Family 1 retirement if the osFamily column in the script output is empty or contains a "1".
+The OS Family 1 retirement affects your cloud services if the osFamily column in the script output is empty or contains a "1".
-## Recommendations if you are affected
+## Recommendations
We recommend you migrate your Cloud Service roles to one of the supported Guest OS Families:
We recommend you migrate your Cloud Service roles to one of the supported Guest
1. Ensure that your application is using SDK 1.3 and above with .NET framework 3.5 or 4.0.
2. Set the osFamily attribute to "2" in the ServiceConfiguration.cscfg file, and redeploy your cloud service.
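For reference, a minimal sketch of where the osFamily attribute sits in a ServiceConfiguration.cscfg file (the service and role names here are placeholders):

```xml
<ServiceConfiguration serviceName="MyService" osFamily="2" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- osFamily on the root element selects the Guest OS family for all roles;
       osVersion="*" keeps the deployment on the latest version of that family. -->
  <Role name="WorkerRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```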
-## Extended Support for Guest OS Family 1 ended Nov 3, 2014
+## Extended Support for Guest OS Family 1 ended November 3, 2014
Cloud services on Guest OS family 1 are no longer supported. Migrate off family 1 as soon as possible to avoid service disruption.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
Title: List of updates applied to the Azure Guest OS | Microsoft Docs
-description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to the Guest OS you are using.
+description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to your Guest OS.
ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 Previously updated : 07/01/2024 Last updated : 07/23/2024

# Azure Guest OS
-The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to your Guest OS. Updates always carry forward for the particular [family][family-explain] they were introduced in.
## June 2024 Guest OS
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 18-07 | [4338613], [4338600], [4338605] |.NET 3.5, 4.x, 4.5x Security |4.56|July 10, 2018 |
| Rel 18-07 | [4338832] |Flash |3.63, 4.76, 5.21 |July 10, 2018 |
| Rel 18-07 | [4339093] |Internet Explorer |2.76, 3.63, 4.76 |July 10, 2018 |
-| N/A | [4284826] |June non-security rollup |2.76 |June 12, 2018 |
-| N/A | [4284855] |June non-security rollup |3.63 |June 12, 2018 |
-| N/A | [4284815] |June non-security rollup |4.56 |June 12, 2018 |
+| N/A | [4284826] |June nonsecurity rollup |2.76 |June 12, 2018 |
+| N/A | [4284855] |June nonsecurity rollup |3.63 |June 12, 2018 |
+| N/A | [4284815] |June nonsecurity rollup |4.56 |June 12, 2018 |
## June 2018 Guest OS

| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 18-06 | [4284878] |Windows Security only |4.55 |June 12, 2018 |
| Rel 18-06 | [4230450] |Internet Explorer |2.75, 3.62, 4.75 |June 12, 2018 |
| Rel 18-06 | [4287903] |Flash |3.62, 4.75, 5.20 |June 12, 2018 |
-| N/A | [4103718] |May non-security rollup |2.75 |May 8, 2018 |
-| N/A | [4103730] |May non-security rollup |3.62 |May 8, 2018 |
-| N/A | [4103725] |May non-security rollup |4.55 |May 8, 2018 |
-| N/A | [4040980], [4040977] |Sept '17 .NET non-security rollup |2.75 |November 14, 2017 |
-| N/A | [4095874] |May .NET 3.5 non-security release |2.75 |May 8, 2018 |
-| N/A | [4096495] |May .NET 4.x non-security release |2.75 |May 8, 2018 |
-| N/A | [4040975] |Sept '17 .NET non-security rollup |3.62 |November 14, 2017 |
-| N/A | [4095872] |May .NET 3.5 non-security release |3.62 |May 8, 2018 |
-| N/A | [4096494] |May .NET 4.x non-security release |3.62 |May 8, 2018 |
-| N/A | [4096416] |May .NET 4.5x non-security release |3.62 |May 8, 2018 |
-| N/A | [4040974], [4040972] |Sept '17 .NET non-security rollup |4.55 |November 14, 2017 |
-| N/A | [4043763] |Oct '17 .NET non-security rollup |4.55 |September 12, 2017 |
-| N/A | [4095876] |May .NET 4.x non-security release |4.55 |May 8, 2018 |
-| N/A | [4096417] |May .NET 4.5x non-security release |4.55 |May 8, 2018 |
+| N/A | [4103718] |May nonsecurity rollup |2.75 |May 8, 2018 |
+| N/A | [4103730] |May nonsecurity rollup |3.62 |May 8, 2018 |
+| N/A | [4103725] |May nonsecurity rollup |4.55 |May 8, 2018 |
+| N/A | [4040980], [4040977] |Sept '17 .NET nonsecurity rollup |2.75 |November 14, 2017 |
+| N/A | [4095874] |May .NET 3.5 nonsecurity release |2.75 |May 8, 2018 |
+| N/A | [4096495] |May .NET 4.x nonsecurity release |2.75 |May 8, 2018 |
+| N/A | [4040975] |Sept '17 .NET nonsecurity rollup |3.62 |November 14, 2017 |
+| N/A | [4095872] |May .NET 3.5 nonsecurity release |3.62 |May 8, 2018 |
+| N/A | [4096494] |May .NET 4.x nonsecurity release |3.62 |May 8, 2018 |
+| N/A | [4096416] |May .NET 4.5x nonsecurity release |3.62 |May 8, 2018 |
+| N/A | [4040974], [4040972] |Sept '17 .NET nonsecurity rollup |4.55 |November 14, 2017 |
+| N/A | [4043763] |Oct '17 .NET nonsecurity rollup |4.55 |September 12, 2017 |
+| N/A | [4095876] |May .NET 4.x nonsecurity release |4.55 |May 8, 2018 |
+| N/A | [4096417] |May .NET 4.5x nonsecurity release |4.55 |May 8, 2018 |
| N/A | [4132216] |May SSU |5.20 |May 8, 2018 |

## May 2018 Guest OS
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 18-05 | [4054856] |.NET 4.7x Security |5.19 |May 8, 2018 |
| Rel 18-05 | [4103768] |Internet Explorer |2.74, 3.61, 4.74 |May 8, 2018 |
| Rel 18-05 | [4103729] |Flash |3.61, 4.74, 5.19 |May 8, 2018 |
-| N/A | [4093118] |April non-security rollup |2.73 |April 10, 2018 |
-| N/A | [4093123] |April non-security rollup |3.61 |April 10, 2018 |
-| N/A | [4093114] |April non-security rollup |4.74 |April 10, 2018 |
+| N/A | [4093118] |April nonsecurity rollup |2.73 |April 10, 2018 |
+| N/A | [4093123] |April nonsecurity rollup |3.61 |April 10, 2018 |
+| N/A | [4093114] |April nonsecurity rollup |4.74 |April 10, 2018 |
| N/A | [4093137] |April SSU |5.19 |April 10, 2018 |
| N/A | [4093753] |Timezone update |2.74, 3.61, 4.74 |April 10, 2018 |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 18-04 | [4093115] |Windows Security only |4.53 |April 10, 2018 |
| Rel 18-04 | [4092946] |Internet Explorer |2.73, 3.60, 4.53 |April 10, 2018 |
| Rel 18-04 | [4093110] |Flash |3.60, 4.53, 5.18 |April 10, 2018 |
-| N/A | [4088875] |March non-security rollup |2.73 |March 13, 2018 |
-| N/A | [4099950] |March non-security rollup pre-requisite|2.73 |March 13, 2018 |
-| N/A | [4088877] |March non-security rollup |3.60 |March 13, 2018 |
-| N/A | [4088876] |March non-security rollup |4.53 |March 13, 2018 |
+| N/A | [4088875] |March nonsecurity rollup |2.73 |March 13, 2018 |
+| N/A | [4099950] |March nonsecurity rollup prerequisite|2.73 |March 13, 2018 |
+| N/A | [4088877] |March nonsecurity rollup |3.60 |March 13, 2018 |
+| N/A | [4088876] |March nonsecurity rollup |4.53 |March 13, 2018 |
## March 2018 Guest OS

| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 18-03 | [4088878], [4088880], [4088879] |Windows Security only |2.72, 3.59, 4.52 |March 13, 2018 |
| Rel 18-03 | [4089187] |Internet Explorer |2.72, 3.59, 4.52 |March 13, 2018 |
| Rel 18-03 | [4074595] |Flash |3.59, 4.52, 5.17 |March 13, 2018 |
-| N/A | [4074598] |February non-security rollup |2.72 |February 13, 2018 |
-| N/A | [4074593] |February non-security rollup |3.59 |February 13, 2018 |
-| N/A | [4074594] |February non-security rollup |4.52 |February 13, 2018 |
+| N/A | [4074598] |February nonsecurity rollup |2.72 |February 13, 2018 |
+| N/A | [4074593] |February nonsecurity rollup |3.59 |February 13, 2018 |
+| N/A | [4074594] |February nonsecurity rollup |4.52 |February 13, 2018 |
| N/A | [4074837] |Timezone update |2.72, 3.59, 4.52 |February 13, 2018 |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 18-02 | [4074587], [4074589], [4074597] |Windows Security only |2.71, 3.58, 4.51 |February 13, 2018 |
| Rel 18-02 | [4074736] |Internet Explorer |2.71, 3.58, 4.51 |February 13, 2018 |
| Rel 18-02 | [4074595] |Flash |3.58, 4.51, 5.16 |February 13, 2018 |
-| N/A | [4056894] |January non-security rollup |2.71 |January 4, 2018 |
-| N/A | [4056896] |January non-security rollup |3.58 |January 4, 2018 |
-| N/A | [4056895] |January non-security rollup |4.51 |January 4, 2018 |
+| N/A | [4056894] |January nonsecurity rollup |2.71 |January 4, 2018 |
+| N/A | [4056896] |January nonsecurity rollup |3.58 |January 4, 2018 |
+| N/A | [4056895] |January nonsecurity rollup |4.51 |January 4, 2018 |
| N/A | [4054176], [4054172] |January .NET rollup |2.71 |January 4, 2018 |
| N/A | [4054175], [4054171] |January .NET rollup |3.58 |January 4, 2018 |
| N/A | [4054177], [4054170] |January .NET rollup |4.51 |January 4, 2018 |
The following tables show the Microsoft Security Response Center (MSRC) updates
| | | | | |
| Rel 18-01 | [4056898], [4056897], [4056899] |Windows Security only |2.70, 3.57, 4.50 |January 3, 2018 |
| Rel 18-01 | [4056890], [4056892] |Windows Security only |5.15 |January 3, 2018 |
-| N/A | [4054518] |December non-security rollup |2.70 |December 12, 2017 |
-| N/A | [4054520] |December non-security rollup |3.57 |December 12, 2017 |
-| N/A | [4054519] |December non-security rollup |4.50 |December 12, 2017 |
+| N/A | [4054518] |December nonsecurity rollup |2.70 |December 12, 2017 |
+| N/A | [4054520] |December nonsecurity rollup |3.57 |December 12, 2017 |
+| N/A | [4054519] |December nonsecurity rollup |4.50 |December 12, 2017 |
| N/A | [4051956] |January timezone update |2.70, 3.57, 4.50 |December 12, 2017 |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-12 | [4054521], [4054522], [4054523] |Windows Security only |2.69, 3.56, 4.49 |December 12, 2017 |
| Rel 17-12 | [4052978] |Internet Explorer |2.69, 3.56, 4.49 |December 12, 2017 |
| Rel 17-12 | [4052978] |Flash |3.56, 4.49, 5.14 |December 12, 2017 |
-| N/A | [4048957] |November non-security rollup |2.69 |November 14, 2017 |
-| N/A | [4048959] |November non-security rollup |3.56 |November 14, 2017 |
-| N/A | [4048958] |November non-security rollup |4.49 |November 14, 2017 |
+| N/A | [4048957] |November nonsecurity rollup |2.69 |November 14, 2017 |
+| N/A | [4048959] |November nonsecurity rollup |3.56 |November 14, 2017 |
+| N/A | [4048958] |November nonsecurity rollup |4.49 |November 14, 2017 |
| N/A | [4049068] |December Timezone update |2.69, 3.56, 4.49 |December 12, 2017 |

## November 2017 Guest OS
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-11 | [4048960], [4048962], [4048961] |Windows Security only |2.68, 3.55, 4.48 |November 14, 2017 |
| Rel 17-11 | [4047206] |Internet Explorer |2.68, 3.55, 4.48 |November 14, 2017 |
| Rel 17-11 | [4048951] |Flash |3.55, 4.48, 5.13 |November 14, 2017 |
-| N/A | [4041681] |October non-security rollup |2.68 |October 10, 2017 |
-| N/A | [4041690] |October non-security rollup |3.55 |October 10, 2017 |
-| N/A | [4041693] |October non-security rollup |4.48 |October 10, 2017 |
+| N/A | [4041681] |October nonsecurity rollup |2.68 |October 10, 2017 |
+| N/A | [4041690] |October nonsecurity rollup |3.55 |October 10, 2017 |
+| N/A | [4041693] |October nonsecurity rollup |4.48 |October 10, 2017 |
| N/A | [3191566] |Update for Windows Management Framework 5.1 |2.68 |November 14, 2017 |
| N/A | [3191565] |Update for Windows Management Framework 5.1 |3.55 |November 14, 2017 |
| N/A | [3191564] |Update for Windows Management Framework 5.1 |4.48 |November 14, 2017 |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-10 | [4041678], [4041679], [4041687] |Windows Security only |2.67, 3.54, 4.47 |October 10, 2017 |
| Rel 17-10 | [4040685] |Internet Explorer |2.67, 3.54, 4.47 |October 10, 2017 |
| Rel 17-10 | [4041681], [4041690], [4041693] |Windows Monthly Rollups |2.67, 3.54, 4.47 |October 10, 2017 |
-| N/A | [4038777] |September non-security rollup |2.67 |September 12, 2017 |
-| N/A | [4038799] |September non-security rollup |3.54 |September 12, 2017 |
-| N/A | [4038792] |September non-security rollup |4.47 |September 12, 2017 |
-| N/A | [4040980] |September .NET non-security rollup |2.67 |September 12, 2017 |
-| N/A | [4040979] |September .NET non-security rollup |3.54 |September 12, 2017 |
-| N/A | [4040981] |September .NET non-security rollup |4.47 |September 12, 2017 |
+| N/A | [4038777] |September nonsecurity rollup |2.67 |September 12, 2017 |
+| N/A | [4038799] |September nonsecurity rollup |3.54 |September 12, 2017 |
+| N/A | [4038792] |September nonsecurity rollup |4.47 |September 12, 2017 |
+| N/A | [4040980] |September .NET nonsecurity rollup |2.67 |September 12, 2017 |
+| N/A | [4040979] |September .NET nonsecurity rollup |3.54 |September 12, 2017 |
+| N/A | [4040981] |September .NET nonsecurity rollup |4.47 |September 12, 2017 |
## September 2017 Guest OS

| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-09 | [4040966], [4040960], [4040965], [4040959], [4033988], [4040955], [4040967], [4040958] |September .NET update |2.66, 3.53, 4.46 |September 12, 2017 |
| Rel 17-09 | [4036586] |Internet Explorer |2.66, 3.53, 4.46 |September 12, 2017 |
| CVE-2017-8704 | [4038782] |Denial of Service |5.11 |September 12, 2017 |
-| N/A | [4034664] |August non-security rollup |2.66 |August 8, 2017 |
-| N/A | [4034665] |August non-security rollup |5.11 |August 8, 2017 |
-| N/A | [4034681] |August non-security rollup |4.46 |August 8, 2017 |
+| N/A | [4034664] |August nonsecurity rollup |2.66 |August 8, 2017 |
+| N/A | [4034665] |August nonsecurity rollup |5.11 |August 8, 2017 |
+| N/A | [4034681] |August nonsecurity rollup |4.46 |August 8, 2017 |
## August 2017 Guest OS

| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-07 | [4034733] |Internet Explorer |2.65, 3.52, 4.45, 5.10 |August 8, 2017 |
| Rel 17-07 | [4034664], [4034665], [4034681] |Windows Monthly Rollups |2.65, 3.52, 4.45 |August 8, 2017 |
| Rel 17-07 | [4034668], [4034660], [4034658], [4034674] |Re-release of CVE-2017-0071, Re-release of CVE-2017-0228 |5.10 |August 8, 2017 |
-| Rel 17-07 | [4025341] |July non-security rollup |2.65 |July 11, 2017 |
-| Rel 17-07 | [4025331] |July non-security rollup |3.52 |July 11, 2017 |
-| Rel 17-07 | [4025336] |July non-security rollup |4.45 |July 11, 2017 |
+| Rel 17-07 | [4025341] |July nonsecurity rollup |2.65 |July 11, 2017 |
+| Rel 17-07 | [4025331] |July nonsecurity rollup |3.52 |July 11, 2017 |
+| Rel 17-07 | [4025336] |July nonsecurity rollup |4.45 |July 11, 2017 |
## July 2017 Guest OS

| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-07 | [4025376] |Flash |3.51, 4.44, 5.9 |July 11, 2017 |
| Rel 17-07 | [4025252] |Internet Explorer |2.64, 3.51, 4.44 |July 11, 2017 |
| N/A | [4020322] |Timezone Update |2.64, 3.51, 4.44 |July 11, 2017 |
-| N/A | [4022719] |June non-security rollup |2.64 |June 13, 2017 |
-| N/A | [4022724] |June non-security rollup |3.51 |June 13, 2017 |
-| N/A | [4022726] |June non-security rollup |4.44 |June 13, 2017 |
+| N/A | [4022719] |June nonsecurity rollup |2.64 |June 13, 2017 |
+| N/A | [4022724] |June nonsecurity rollup |3.51 |June 13, 2017 |
+| N/A | [4022726] |June nonsecurity rollup |4.44 |June 13, 2017 |
## June 2017 Guest OS

| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-06 | [4022730] |Security update for Adobe Flash Player |3.50, 4.43, 5.8 |June 13, 2017 |
| Rel 17-06 | [4015217], [4015221], [4015583], [4015550], [4015219] |Re-release of CVE-2017-0167 |4.43, 5.8 |April 11, 2017 |
| N/A | [4023136] |Timezone update |2.63, 3.50, 4.43 |June 13, 2017 |
-| N/A | [4019264] |May non-security rollup |2.63 |June 13, 2017 |
-| N/A | [4014545] |May .NET non-security rollup |2.63 |April 11, 2017 |
-| N/A | [4014508] |May .NET non-security rollup |2.63 |May 9, 2017 |
-| N/A | [4014511] |May .NET non-security rollup |2.63 |May 9, 2017 |
-| N/A | [4014514] |May .NET non-security rollup |2.63 |May 9, 2017 |
-| N/A | [4019216] |May non-security rollup |3.50 |May 9, 2017 |
-| N/A | 4014503 |May .NET non-security rollup |3.50 |May 9, 2017 |
-| N/A | [4014506] |May .NET non-security rollup |3.50 |May 9, 2017 |
-| N/A | [4014509] |May .NET non-security rollup |3.50 |May 9, 2017 |
-| N/A | [4014513] |May .NET non-security rollup |3.50 |May 9, 2017 |
-| N/A | [4019215] |May non-security rollup |4.43 |May 9, 2017 |
-| N/A | [4014505] |May .NET non-security rollup |4.43 |May 9, 2017 |
-| N/A | [4014507] |May .NET non-security rollup |4.43 |May 9, 2017 |
-| N/A | [4014510] |May .NET non-security rollup |4.43 |May 9, 2017 |
-| N/A | [4014512] |May .NET non-security rollup |4.43 |May 9, 2017 |
+| N/A | [4019264] |May nonsecurity rollup |2.63 |June 13, 2017 |
+| N/A | [4014545] |May .NET nonsecurity rollup |2.63 |April 11, 2017 |
+| N/A | [4014508] |May .NET nonsecurity rollup |2.63 |May 9, 2017 |
+| N/A | [4014511] |May .NET nonsecurity rollup |2.63 |May 9, 2017 |
+| N/A | [4014514] |May .NET nonsecurity rollup |2.63 |May 9, 2017 |
+| N/A | [4019216] |May nonsecurity rollup |3.50 |May 9, 2017 |
+| N/A | 4014503 |May .NET nonsecurity rollup |3.50 |May 9, 2017 |
+| N/A | [4014506] |May .NET nonsecurity rollup |3.50 |May 9, 2017 |
+| N/A | [4014509] |May .NET nonsecurity rollup |3.50 |May 9, 2017 |
+| N/A | [4014513] |May .NET nonsecurity rollup |3.50 |May 9, 2017 |
+| N/A | [4019215] |May nonsecurity rollup |4.43 |May 9, 2017 |
+| N/A | [4014505] |May .NET nonsecurity rollup |4.43 |May 9, 2017 |
+| N/A | [4014507] |May .NET nonsecurity rollup |4.43 |May 9, 2017 |
+| N/A | [4014510] |May .NET nonsecurity rollup |4.43 |May 9, 2017 |
+| N/A | [4014512] |May .NET nonsecurity rollup |4.43 |May 9, 2017 |
## May 2017 Guest OS

| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 17-05 | [4022345] |Microsoft Security Advisory |5.7 | May 9, 2017 |
| Rel 17-05 | [4021279] |.NET /ASP.NET Core Advisory |2.62, 3.49, 4.42, 5.7 | May 9, 2017 |
| N/A | [4012864] |Timezone Update |2.62, 3.49, 4.42 | May 9, 2017 |
-| N/A | [4014565] |April .NET non-security rollup |2.62 | April 11, 2017 |
-| N/A | [4014559] |April .NET non-security rollup |2.62 | April 11, 2017 |
+| N/A | [4014565] |April .NET nonsecurity rollup |2.62 | April 11, 2017 |
+| N/A | [4014559] |April .NET nonsecurity rollup |2.62 | April 11, 2017 |
| N/A | [4015549] |April non-Security Rollup |2.62 | April 11, 2017 |
| N/A | [4019990] |D3DCompiler update - requirement for .NET 4.7 |3.49 | May 9, 2017 |
-| N/A | [4014563] |April .NET non-security rollup |3.49 | April 11, 2017 |
-| N/A | [4014557] |April .NET non-security rollup |3.49 | April 11, 2017 |
-| N/A | [4014545] |April .NET non-security rollup |3.49 | April 11, 2017 |
-| N/A | [4014548] |April .NET non-security rollup |3.49 | April 11, 2017 |
-| N/A | [4015551] |April non-security rollup |3.49 | April 11, 2017 |
+| N/A | [4014563] |April .NET nonsecurity rollup |3.49 | April 11, 2017 |
+| N/A | [4014557] |April .NET nonsecurity rollup |3.49 | April 11, 2017 |
+| N/A | [4014545] |April .NET nonsecurity rollup |3.49 | April 11, 2017 |
+| N/A | [4014548] |April .NET nonsecurity rollup |3.49 | April 11, 2017 |
+| N/A | [4015551] |April nonsecurity rollup |3.49 | April 11, 2017 |
| N/A | [3173424] |Servicing Stack Update |4.42 | July 12, 2016 |
-| N/A | [4014555] |April .NET non-security rollup |4.42 | April 11, 2017 |
-| N/A | [4014567] |April .NET non-security rollup |4.42 | April 11, 2017 |
-| N/A | [4015550] |April non-security rollup |4.42 | April 11, 2017 |
+| N/A | [4014555] |April .NET nonsecurity rollup |4.42 | April 11, 2017 |
+| N/A | [4014567] |April .NET nonsecurity rollup |4.42 | April 11, 2017 |
+| N/A | [4015550] |April nonsecurity rollup |4.42 | April 11, 2017 |
| N/A | [4013418] |Servicing Stack Update |5.7 | March 14, 2017 |

## April 2017 Guest OS
The following tables show the Microsoft Security Response Center (MSRC) updates
| MS16-077 |[3165191] |Security Update for WPAD |4.33, 3.40, 2.52 |June 14, 2016 |
| MS16-080 |[3164302] |Security Update for Microsoft Windows PDF |4.33, 3.40 |June 14, 2016 |
| MS16-081 |[3160352] |Security Update for Active Directory |4.33, 3.40, 2.52 |June 14, 2016 |
-| N/A |[2922223] |You cannot change system time if RealTimeIsUniversal registry entry is enabled in Windows |2.52 |June 14, 2016 |
+| N/A |[2922223] |You can't change system time if RealTimeIsUniversal registry entry is enabled in Windows |2.52 |June 14, 2016 |
| N/A |[3121255] |"0x00000024" Stop error in FsRtlNotifyFilterReportChange and copy file may fail in Windows |2.52 |June 14, 2016 |
| N/A |[3125424] |LSASS deadlocks cause Windows Server 2012 R2 or Windows Server 2012 not to respond |4.33, 3.40 |June 14, 2016 |
| N/A |[3125574] |Convenience rollup update for Windows 7 SP1 and Windows Server 2008 R2 SP1 |2.52 |June 14, 2016 |
The following tables show the Microsoft Security Response Center (MSRC) updates
| N/A |[3012325] |Windows APN database entries update for DIGI, Vodafone, and Telekom mobile operators in Windows 8.1 and Windows 8 |4.15, 3.22, 2.34 |Jan 13 2015 |
| N/A |[3007054] |PIN-protected printing option always shows when you print a document within a Windows Store application in Windows |4.15, 3.22, 2.34 |Jan 13 2015 |
| N/A |[2999802] |Solid lines instead of dotted lines are printed in Windows |4.15, 3.22, 2.34 |Jan 13 2015 |
-| N/A |[2896881] |Long logon time when you use the AddPrinterConnection VBScript command to map printers for users during logon process in Windows |4.15, 3.22, 2.34 |Jan 13 2015 |
+| N/A |[2896881] |Long sign-in time when you use the AddPrinterConnection VBScript command to map printers for users during the sign-in process in Windows |4.15, 3.22, 2.34 |Jan 13 2015 |
[4457131]: https://support.microsoft.com/kb/4457131
[4457145]: https://support.microsoft.com/kb/4457145
cloud-services Cloud Services Guestos Retirement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-retirement-policy.md
Title: Supportability and retirement policy guide for Azure Guest OS | Microsoft Docs
-description: Provides information about what Microsoft will support as regards to the Azure Guest OS used by Cloud Services.
+description: Provides information about what Microsoft supports regarding the Azure Guest OS used by Cloud Services.
ms.assetid: 919dd781-4dc6-4e50-bda8-9632966c5458 Previously updated : 02/21/2023 Last updated : 07/23/2024

# Azure Guest OS supportability and retirement policy
-The information on this page relates to the Azure Guest operating system ([Guest OS](cloud-services-guestos-update-matrix.md)) for Cloud Services worker and web roles (PaaS). It does not apply to Virtual Machines (IaaS).
+The information on this page relates to the Azure Guest operating system ([Guest OS](cloud-services-guestos-update-matrix.md)) for Cloud Services worker and web roles (PaaS). It doesn't apply to Virtual Machines (IaaS).
-Microsoft has a published [support policy for the Guest OS](https://support.microsoft.com/gp/azure-cloud-lifecycle-faq). The page you are reading now describes how the policy is implemented.
+Microsoft has a published [support policy for the Guest OS](https://support.microsoft.com/gp/azure-cloud-lifecycle-faq). This page describes how the policy is implemented.
-The policy is
+The policy is:
-1. Microsoft will support **at least the latest two families of the Guest OS**. When a family is retired, customers have 12 months from the official retirement date to update to a newer supported Guest OS family.
-2. Microsoft will support **at least the latest two versions of the supported Guest OS families**.
-3. Microsoft will support **at least the latest two versions of the Azure SDK**. When a version of the SDK is retired, customers will have 12 months from the official retirement date to update to a newer version.
+* Microsoft supports **at least the latest two families of the Guest OS**. When a family is retired, customers have 12 months from the official retirement date to update to a newer supported Guest OS family.
+* Microsoft supports **at least the latest two versions of the supported Guest OS families**.
+* Microsoft supports **at least the latest two versions of the Azure SDK**. When a version of the SDK is retired, customers have 12 months from the official retirement date to update to a newer version.
-At times, more than two families or releases may be supported. Official Guest OS support information will appear on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
+At times, more than two families or releases may be supported. Official Guest OS support information appears on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
## When a Guest OS version is retired
-New Guest OS **versions** are introduced about every month to incorporate the latest MSRC updates. Because of the regular monthly updates, a Guest OS version is normally disabled around 60 days after its release. This activity keeps at least two Guest OS versions for each family available for use.
+New Guest OS **versions** are introduced about every month to incorporate the latest Microsoft Security Response Center (MSRC) updates. Because of the regular monthly updates, a Guest OS version is normally disabled around 60 days after its release. This activity keeps at least two Guest OS versions for each family available for use.
### Process during a Guest OS family retirement
-Once the retirement is announced, customers have a 12 month "transition" period before the older family is officially removed from service. This transition time may be extended at the discretion of Microsoft. Updates will be posted on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
+Once the retirement is announced, customers have a 12 month "transition" period before the older family is officially removed from service. This transition time may be extended at the discretion of Microsoft. Microsoft posts updates on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
-A gradual retirement process will begin six (6) months into the transition period. During this time:
+A gradual retirement process begins six (6) months into the transition period. During this time:
-1. Microsoft will notify customers of the retirement.
-2. The newer version of the Azure SDK won't support the retired Guest OS family.
-3. New deployments and redeployments of Cloud Services will not be allowed on the retired family
+* Microsoft notifies customers of the retirement.
+* The newer version of the Azure SDK doesn't support the retired Guest OS family.
+* New deployments and redeployments of Cloud Services are prohibited on the retired family.
-Microsoft will continue to introduce new Guest OS version incorporating the latest MSRC updates until the last day of the transition period, known as the "expiration date". On the expiration date, Cloud Services still running will be unsupported under the Azure SLA. Microsoft has the discretion to force upgrade, delete or stop those services after that date.
+Microsoft continues to introduce new Guest OS versions incorporating the latest MSRC updates until the last day of the transition period, known as the "expiration date." On the expiration date, cloud services still running are unsupported under the Azure Service Level Agreement (SLA). Microsoft has the discretion to force upgrade, delete, or stop those services after that date.
### Process during a Guest OS Version retirement
-If customers set their Guest OS to automatically update, they never have to worry about dealing with Guest OS versions. They will always be using the latest Guest OS version.
+If customers set their Guest OS to automatically update, they never have to worry about dealing with Guest OS versions. They're always using the latest Guest OS version.
Guest OS Versions are released every month. Because of the rate of regular releases, each version has a fixed lifespan.
-At 60 days into the lifespan, a version is "*disabled*". "Disabled" means that the version is removed from the portal. The version can no longer be set from the CSCFG configuration file. Existing deployments are left running. But new deployments and code and configuration updates to existing deployments will not be allowed.
+At 60 days into the lifespan, a version is "*disabled*." "Disabled" means that the version is removed from the portal. The version can no longer be set from the CSCFG configuration file. Existing deployments are left running, but new deployments and code and configuration updates to existing deployments are prohibited.
-Sometime after becoming "disabled", the Guest OS version "expires" and any installations still running that expired version are exposed to security and vulnerability issues. Generally, expiration is done in batches, so the period from disablement to expiration can vary.
+Sometime after the Guest OS version becomes "disabled," it "expires," and any installations still running that expired version are exposed to security and vulnerability issues. Generally, expiration is done in batches, so the period from disablement to expiration can vary.
-Customers who configure their services to update the Guest OS manually, should ensure that their services are running on a supported Guest OS. If a service is configured to update the Guest OS automatically, the underlying platform will ensure compliance and will upgrade to the latest Guest OS.
+Customers who configure their services to update the Guest OS manually should ensure that their services are running on a supported Guest OS. If a service is configured to update the Guest OS automatically, the underlying platform ensures compliance and upgrades to the latest Guest OS.
-These periods may be made longer at Microsoft's discretion to ease customer transitions. Any changes will be communicated on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
+These periods may be made longer at Microsoft's discretion to ease customer transitions. Microsoft communicates any changes on the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
### Notifications during retirement
-* **Family retirement** <br>Microsoft will use blog posts and portal notification. Customers who are still using a retired Guest OS family will be notified through direct communication (email, portal messages, phone call) to assigned service administrators. All changes will be posted to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
-* **Version Retirement** <br>All changes and the dates they occur will be posted to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md), including release, disabled, and expiration. Services admins will receive emails if they have deployments running on a disabled Guest OS version or family. The timing of these emails can vary. Generally they are at least a month before disablement, though this timing is not an official SLA.
+* **Family retirement** <br>Microsoft uses blog posts and portal notification. Microsoft informs customers who are still using a retired Guest OS family through direct communication (email, portal messages, phone call) to assigned service administrators. Microsoft posts all changes to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).
+* **Version Retirement** <br>Microsoft posts all changes and the dates they occur to the [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md), including release, disabled, and expiration. Service admins receive emails if they have deployments running on a disabled Guest OS version or family. The timing of these emails can vary. Generally, they arrive at least a month before disablement, though this timing isn't an official SLA.
## Frequently asked questions

**How can I mitigate the impacts of migration?**

We recommend that you use the latest Guest OS family for designing your Cloud Services.
-1. Start planning your migration to a newer family early.
-2. Set up temporary test deployments to test your Cloud Service running on the new family.
-3. Set your Guest OS version to **Automatic** (osVersion=* in the [.cscfg](cloud-services-model-and-package.md#cscfg) file) so the migration to new Guest OS versions occurs automatically.
+* Start planning your migration to a newer family early.
+* Set up temporary test deployments to test your Cloud Service running on the new family.
+* Set your Guest OS version to **Automatic** (osVersion=* in the [.cscfg](cloud-services-model-and-package.md#cscfg) file) so the migration to new Guest OS versions occurs automatically.
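As an illustration of the last point, a minimal .cscfg sketch with the version set to **Automatic** (the service and role names are placeholders):

```xml
<ServiceConfiguration serviceName="MyService" osFamily="7" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- osVersion="*" opts this deployment into automatic Guest OS updates,
       so new monthly Guest OS versions are applied without redeployment. -->
  <Role name="WorkerRole1">
    <Instances count="1" />
  </Role>
</ServiceConfiguration>
```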
**What if my web application requires deeper integration with the OS?**
-If your web application architecture depends on underlying features of the operating system, use platform supported capabilities such as [startup tasks](cloud-services-startup-tasks.md) or other extensibility mechanisms. Alternatively, you can also use [Azure Virtual Machines](https://azure.microsoft.com/documentation/scenarios/virtual-machines/) (IaaS - Infrastructure as a Service), where you are responsible for maintaining the underlying operating system.
+If your web application architecture depends on underlying features of the operating system, use platform-supported capabilities such as [startup tasks](cloud-services-startup-tasks.md) or other extensibility mechanisms. Alternatively, you can also use [Azure Virtual Machines](https://azure.microsoft.com/documentation/scenarios/virtual-machines/) (IaaS - Infrastructure as a Service), where you're responsible for maintaining the underlying operating system.
## Next steps

Review the latest [Guest OS releases](cloud-services-guestos-update-matrix.md).
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 Previously updated : 06/28/2024 Last updated : 07/23/2024

# Azure Guest OS releases and SDK compatibility matrix
-Provides you with up-to-date information about the latest Azure Guest OS releases for Cloud Services. This information helps you plan your upgrade path before a Guest OS is disabled. If you configure your roles to use *automatic* Guest OS updates as described in [Azure Guest OS Update Settings][Azure Guest OS Update Settings], it is not vital that you read this page.
+Provides you with up-to-date information about the latest Azure Guest OS releases for Cloud Services. This information helps you plan your upgrade path before a Guest OS is disabled. If you configure your roles to use *automatic* Guest OS updates as described in [Azure Guest OS Update Settings][Azure Guest OS Update Settings], it isn't vital that you read this page.
> [!IMPORTANT]
> This page applies to Cloud Services web and worker roles, which run on top of a Guest OS. It does **not apply** to IaaS Virtual Machines.
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates

###### **June 27, 2024**
-The June Guest OS has released.
+The June Guest OS released.
###### **June 1, 2024**
-The May Guest OS has released.
+The May Guest OS released.
###### **April 19, 2024**
-The April Guest OS has released.
+The April Guest OS released.
###### **April 9, 2024**
-The March Guest OS has released.
+The March Guest OS released.
###### **February 24, 2024**
-The February Guest OS has released.
+The February Guest OS released.
###### **January 22, 2024**
-The January Guest OS has released.
+The January Guest OS released.
###### **January 16, 2024**
-The December Guest OS has released.
+The December Guest OS released.
###### **December 8, 2023**
-The November Guest OS has released.
+The November Guest OS released.
###### **October 23, 2023**
-The October Guest OS has released.
+The October Guest OS released.
###### **September 26, 2023**
-The September Guest OS has released.
+The September Guest OS released.
###### **August 21, 2023**
-The August Guest OS has released.
+The August Guest OS released.
###### **July 27, 2023**
-The July Guest OS has released.
+The July Guest OS released.
###### **July 8, 2023**
-The June Guest OS has released.
+The June Guest OS released.
###### **May 19, 2023**
-The May Guest OS has released.
+The May Guest OS released.
###### **April 27, 2023**
-The April Guest OS has released.
+The April Guest OS released.
###### **March 28, 2023**
-The March Guest OS has released.
+The March Guest OS released.
###### **March 1, 2023**
-The February Guest OS has released.
+The February Guest OS released.
###### **January 31, 2023**
-The January Guest OS has released.
+The January Guest OS released.
###### **January 19, 2023**
-The December Guest OS has released.
+The December Guest OS released.
###### **December 12, 2022**
-The November Guest OS has released.
+The November Guest OS released.
###### **November 4, 2022**
-The October Guest OS has released.
+The October Guest OS released.
###### **September 29, 2022**
-The September Guest OS has released.
+The September Guest OS released.
###### **September 2, 2022**
-The August Guest OS has released.
+The August Guest OS released.
###### **August 3, 2022**
-The July Guest OS has released.
+The July Guest OS released.
###### **July 11, 2022**
-The June Guest OS has released.
+The June Guest OS released.
###### **May 26, 2022**
-The May Guest OS has released.
+The May Guest OS released.
###### **April 30, 2022**
-The April Guest OS has released.
+The April Guest OS released.
###### **March 19, 2022**
-The March Guest OS has released.
+The March Guest OS released.
###### **March 2, 2022**
-The February Guest OS has released.
+The February Guest OS released.
###### **February 11, 2022**
-The January Guest OS has released.
+The January Guest OS released.
###### **January 10, 2022**
-The December Guest OS has released.
+The December Guest OS released.
###### **November 19, 2021**
-The November Guest OS has released.
+The November Guest OS released.
###### **November 1, 2021**
-The October Guest OS has released.
+The October Guest OS released.
###### **October 8, 2021**
-The September Guest OS has released.
+The September Guest OS released.
###### **August 27, 2021**
-The August Guest OS has released.
+The August Guest OS released.
###### **August 13, 2021**
-The July Guest OS has released.
+The July Guest OS released.
###### **July 1, 2021**
-The June Guest OS has released.
+The June Guest OS released.
###### **May 26, 2021**
-The May Guest OS has released.
+The May Guest OS released.
###### **April 30, 2021**
-The April Guest OS has released.
+The April Guest OS released.
###### **March 28, 2021**
-The March Guest OS has released.
+The March Guest OS released.
###### **February 19, 2021**
-The February Guest OS has released.
+The February Guest OS released.
###### **February 5, 2021**
-The January Guest OS has released.
+The January Guest OS released.
###### **January 15, 2021**
-The December Guest OS has released.
+The December Guest OS released.
###### **December 19, 2020**
-The November Guest OS has released.
+The November Guest OS released.
###### **November 17, 2020**
-The October Guest OS has released.
+The October Guest OS released.
###### **October 10, 2020**
-The September Guest OS has released.
+The September Guest OS released.
###### **September 5, 2020**
-The August Guest OS has released.
+The August Guest OS released.
###### **August 17, 2020**
-The July Guest OS has released.
+The July Guest OS released.
###### **August 10, 2020**
-The June Guest OS has released.
+The June Guest OS released.
###### **June 2, 2020**
-The May Guest OS has released.
+The May Guest OS released.
###### **May 4, 2020**
-The April Guest OS has released.
+The April Guest OS released.
###### **April 2, 2020**
-The March Guest OS has released.
+The March Guest OS released.
###### **March 5, 2020**
-The February Guest OS has released.
+The February Guest OS released.
###### **January 24, 2020**
-The January Guest OS has released.
+The January Guest OS released.
###### **January 8, 2020**
-The December Guest OS has released.
+The December Guest OS released.
###### **December 5, 2019**
-The November Guest OS has released.
+The November Guest OS released.
###### **November 1, 2019**
-The October Guest OS has released.
+The October Guest OS released.
###### **October 7, 2019**
-The September Guest OS has released.
+The September Guest OS released.
###### **September 4, 2019**
-The August Guest OS has released.
+The August Guest OS released.
###### **July 26, 2019**
-The July Guest OS has released.
+The July Guest OS released.
###### **July 8, 2019**
-The June Guest OS has released.
+The June Guest OS released.
###### **June 6, 2019**
-The May Guest OS has released.
+The May Guest OS released.
###### **May 7, 2019**
-The April Guest OS has released.
+The April Guest OS released.
###### **March 26, 2019**
-The March Guest OS has released.
+The March Guest OS released.
###### **March 12, 2019**
-The February Guest OS has released.
+The February Guest OS released.
###### **February 5, 2019**
-The January Guest OS has released.
+The January Guest OS released.
###### **January 24, 2019**
-Family 6 Guest OS (Windows Server 2019) has released.
+Family 6 Guest OS (Windows Server 2019) released.
###### **January 7, 2019**
-The December Guest OS has released.
+The December Guest OS released.
###### **December 14, 2018**
-The November Guest OS has released.
+The November Guest OS released.
###### **November 8, 2018**
-The October Guest OS has released.
+The October Guest OS released.
###### **October 12, 2018**
-The September Guest OS has released.
+The September Guest OS released.
## Releases
The September Guest OS has released.
|~~WA-GUEST-OS-4.102_202204-01~~| April 30, 2022 | July 11, 2022 |
|~~WA-GUEST-OS-4.101_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-4.100_202202-01~~| March 2, 2022 | April 30, 2022 |
-|~~WA-GUEST-OS-4.99_202201-02~~| February 11 , 2022 | March 19, 2022 |
-|~~WA-GUEST-OS-4.97_202112-01~~| January 10 , 2022 | March 2, 2022 |
+|~~WA-GUEST-OS-4.99_202201-02~~| February 11, 2022 | March 19, 2022 |
+|~~WA-GUEST-OS-4.97_202112-01~~| January 10, 2022 | March 2, 2022 |
|~~WA-GUEST-OS-4.96_202111-01~~| November 19, 2021 | February 11, 2022 |
|~~WA-GUEST-OS-4.95_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-4.94_202109-01~~| October 8, 2021 | November 19, 2021 |
Even though the [retirement policy for the Azure SDK][retire policy sdk] indicat
| 1 |Version 1.0+ |

## Guest OS release information
-There are three dates that are important to Guest OS releases: **release** date, **disabled** date, and **expiration** date. A Guest OS is considered available when it is in the Portal and can be selected as the target Guest OS. When a Guest OS reaches the **disabled** date, it is removed from Azure. However, any Cloud Service targeting that Guest OS will still operate as normal.
+There are three dates that are important to Guest OS releases: **release** date, **disabled** date, and **expiration** date. A Guest OS is considered available when it is in the Portal and can be selected as the target Guest OS. When a Guest OS reaches the **disabled** date, Microsoft removes it from Azure. However, any Cloud Service targeting that Guest OS still operates as normal.
-The window between the **disabled** date and the **expiration** date provides you with a buffer to easily transition from one Guest OS to one newer. If you're using *automatic* as your Guest OS, you'll always be on the latest version and you don't have to worry about it expiring.
+The window between the **disabled** date and the **expiration** date provides you with a buffer to easily transition from one Guest OS to a newer one. If you're using *automatic* as your Guest OS, you're always on the latest version and you don't have to worry about it expiring.
-When the **expiration** date passes, any Cloud Service still using that Guest OS will be stopped, deleted, or forced to upgrade. You can read more about the retirement policy [here][retirepolicy].
+When the **expiration** date passes, any Cloud Service still using that Guest OS is stopped, deleted, or forced to upgrade. You can read more about the retirement policy [here][retirepolicy].
## Guest OS family-version explanation

The Guest OS families are based on released versions of Microsoft Windows Server. The Guest OS is the underlying operating system that Azure Cloud Services runs on. Each Guest OS has a family, version, and release number.
The Guest OS families are based on released versions of Microsoft Windows Server
Numbers start at 0 and increment by 1 each time a new set of updates is added. Trailing zeros are only shown if important. That is, version 2.10 is a different, much later version than version 2.1.

* **Guest OS release**
- A rerelease of a Guest OS version. A rerelease occurs if Microsoft finds issues during testing; requiring changes. The latest release always supersedes any previous releases, public or not. The Azure portal will only allow users to pick the latest release for a given version. Deployments running on a previous release are usually not force upgraded depending on the severity of the bug.
+ A rerelease of a Guest OS version. A rerelease occurs if Microsoft finds issues during testing that require changes. The latest release always supersedes any previous releases, public or not. The Azure portal only allows users to pick the latest release for a given version. Deployments running on a previous release usually aren't force upgraded; whether they are depends on the severity of the bug.
-In the example below, 2 is the family, 12 is the version and "rel2" is the release.
+In the following example, 2 is the family, 12 is the version, and "rel2" is the release.
**Guest OS release** - 2.12 rel2

**Configuration string for this release** - WA-GUEST-OS-2.12_201208-02
-The configuration string for a Guest OS has this same information embedded in it, along with a date showing which MSRC patches were considered for that release. In this example, MSRC patches produced for Windows Server 2008 R2 up to and including August 2012 were considered for inclusion. Only patches specifically applying to that version of Windows Server are included. For example, if an MSRC patch applies to Microsoft Office, it will not be included because that product is not part of the Windows Server base image.
+The configuration string for a Guest OS has this same information embedded in it, along with a date showing which MSRC patches were considered for that release. In this example, MSRC patches produced for Windows Server 2008 R2 up to and including August 2012 were considered for inclusion. Only patches specifically applying to that version of Windows Server are included. For example, if an MSRC patch applies to Microsoft Office, it isn't included because that product isn't part of the Windows Server base image.
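To pin a deployment to one specific release instead of updating automatically, the configuration string can be supplied as the osVersion value in the .cscfg file. A minimal sketch using the release from the example above (the service and role names are placeholders):

```xml
<ServiceConfiguration serviceName="MyService" osFamily="2"
    osVersion="WA-GUEST-OS-2.12_201208-02"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- Pinning osVersion to a configuration string targets that exact
       Guest OS release until you change it manually. -->
  <Role name="WorkerRole1">
    <Instances count="1" />
  </Role>
</ServiceConfiguration>
```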
## Guest OS system update process
-This page includes information on upcoming Guest OS Releases. Customers have indicated that they want to know when a release occurs because their cloud service roles will reboot if they are set to "Automatic" update. Guest OS releases typically occur 2-3 weeks after the MSRC update release that occurs on the second Tuesday of every month. New releases include all the relevant MSRC patches for each Guest OS family.
+This page includes information on upcoming Guest OS Releases. Some customers want to know when a release occurs because cloud service roles that are set to update automatically reboot when a release rolls out. Guest OS releases typically occur 2-3 weeks after the MSRC update release that occurs on the second Tuesday of every month. New releases include all the relevant MSRC patches for each Guest OS family.
-Microsoft Azure is constantly releasing updates. The Guest OS is only one such update in the pipeline. A release can be affected by many factors too numerous to list here. In addition, Azure runs on literally hundreds of thousands of machines. This means that it's impossible to give an exact date and time when your role(s) will reboot. We are working on a plan to limit or time reboots.
+Microsoft Azure is constantly releasing updates. The Guest OS is only one such update in the pipeline. Many factors affect a release, and they're too numerous to list here. In addition, Azure runs on literally hundreds of thousands of machines. This means that it's impossible to give an exact date and time to expect your role or roles to reboot. We're working on a plan to limit or time reboots.
-When a new release of the Guest OS is published, it can take time to fully propagate across Azure. As services are updated to the new Guest OS, they are rebooted honoring update domains. Services set to use "Automatic" updates will get a release first. After the update, youΓÇÖll see the new Guest OS version listed for your service in the Azure portal. Rereleases may occur during this period. Some versions may be deployed over longer periods of time and automatic upgrade reboots may not occur for many weeks after the official release date. Once a Guest OS is available, you can then explicitly choose that version from the portal or in your configuration file.
+When a new release of the Guest OS is published, it can take time to fully propagate across Azure. As services are updated to the new Guest OS, they reboot, honoring update domains. Services set to use "Automatic" updates get a release first. After the update, you'll see the new Guest OS version listed for your service in the Azure portal. Rereleases may occur during this period. Some versions may be deployed over longer periods of time and automatic upgrade reboots may not occur for many weeks after the official release date. Once a Guest OS is available, you can then explicitly choose that version from the portal or in your configuration file.
-For a great deal of valuable information on restarts and pointers to more information technical details of Guest and Host OS updates, see the MSDN blog post titled [Role Instance Restarts Due to OS Upgrades][restarts].
+For a great deal of valuable information on restarts and pointers to more information on Guest and Host OS updates, see the Microsoft Developer Network (MSDN) blog post titled [Role Instance Restarts Due to OS Upgrades][restarts].
-If you manually update your Guest OS, see the [Guest OS retirement policy][retirepolicy] for additional information.
+For more information about manually updating your Guest OS, see the [Guest OS retirement policy][retirepolicy].
## Guest OS supportability and retirement policy

The Guest OS supportability and retirement policy is explained [here][retirepolicy].
cloud-services Cloud Services How To Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-configure-portal.md
description: Learn how to configure cloud services in Azure. Learn to update the
Previously updated : 02/21/2023 Last updated : 07/23/2024
After opening the [Azure portal](https://portal.azure.com/), navigate to your cl
![Settings Page](./media/cloud-services-how-to-configure-portal/cloud-service.png)
-The **Settings** or **All settings** links will open up **Settings** where you can change the **Properties**, change the **Configuration**, manage the **Certificates**, set up **Alert rules**, and manage the **Users** who have access to this cloud service.
+The **Settings** or **All settings** links open up **Settings** where you can change the **Properties**, change the **Configuration**, manage the **Certificates**, set up **Alert rules**, and manage the **Users** who have access to this cloud service.
![Azure cloud service settings](./media/cloud-services-how-to-configure-portal/cs-settings-blade.png)

### Manage Guest OS version
-By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you've specified in your service configuration (.cscfg), such as Windows Server 2016.
+By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you specified in your service configuration (.cscfg), such as Windows Server 2016.
If you need to target a specific OS version, you can set it in **Configuration**.
If you need to target a specific OS version, you can set it in **Configuration**
## Monitoring
-You can add alerts to your cloud service. Click **Settings** > **Alert Rules** > **Add alert**.
+You can add alerts to your cloud service. Select **Settings** > **Alert Rules** > **Add alert**.
![Screenshot of the Settings pane with the Alert rules option highlighted and outlined in red and the Add alert option outlined in red.](./media/cloud-services-how-to-configure-portal/cs-alerts.png)
From here, you can set up an alert. With the **Metric** drop-down box, you can s
### Configure monitoring from a metric tile
-Instead of using **Settings** > **Alert Rules**, you can click on one of the metric tiles in the **Monitoring** section of the cloud service.
+Instead of using **Settings** > **Alert Rules**, you can select one of the metric tiles in the **Monitoring** section of the cloud service.
![Cloud Service Monitoring](./media/cloud-services-how-to-configure-portal/cs-monitoring.png)
You can then initiate a remote desktop connection, remotely reboot the instance,
You may need to reconfigure your cloud service through the [service config (cscfg)](cloud-services-model-and-package.md#cscfg) file. First you need to download your .cscfg file, modify it, then upload it.
-1. Click on the **Settings** icon or the **All settings** link to open up **Settings**.
+1. Select the **Settings** icon or the **All settings** link to open up **Settings**.
![Settings Page](./media/cloud-services-how-to-configure-portal/cloud-service.png)
-2. Click on the **Configuration** item.
+2. Select the **Configuration** item.
![Configuration Blade](./media/cloud-services-how-to-configure-portal/cs-settings-config.png)
-3. Click on the **Download** button.
+3. Select the **Download** button.
![Download](./media/cloud-services-how-to-configure-portal/cs-settings-config-panel-download.png)

4. After you update the service configuration file, upload and apply the configuration updates:

![Upload](./media/cloud-services-how-to-configure-portal/cs-settings-config-panel-upload.png)
-5. Select the .cscfg file and click **OK**.
+5. Select the .cscfg file and select **OK**.
## Next steps
cloud-services Cloud Services How To Create Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-create-deploy-portal.md
Title: How to create and deploy a cloud service (classic) | Microsoft Docs
description: Learn how to use the Quick Create method to create a cloud service and use Upload to upload and deploy a cloud service package in Azure. Previously updated : 02/21/2023 Last updated : 07/23/2024
Three components are required to deploy an application as a cloud service in Azu
* **Service Package** The service package (.cspkg) contains the application code and configurations and the service definition file.
-You can learn more about these and how to create a package [here](cloud-services-model-and-package.md).
+You can learn more about these components and how to create a package [here](cloud-services-model-and-package.md).
## Prepare your app Before you can deploy a cloud service, you must create the cloud service package (.cspkg) from your application code and a cloud service configuration file (.cscfg). The Azure SDK provides tools for preparing these required deployment files. You can install the SDK from the [Azure Downloads](https://azure.microsoft.com/downloads/) page, in the language in which you prefer to develop your application code.
Three cloud service features require special configurations before you export a
* If you want to deploy a cloud service that uses Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), for data encryption, [configure your application](cloud-services-configure-ssl-certificate-portal.md#modify) for TLS.
* If you want to configure Remote Desktop connections to role instances, [configure the roles](cloud-services-role-enable-remote-desktop-new-portal.md) for Remote Desktop.
-* If you want to configure verbose monitoring for your cloud service, enable Azure Diagnostics for the cloud service. *Minimal monitoring* (the default monitoring level) uses performance counters gathered from the host operating systems for role instances (virtual machines). *Verbose monitoring* gathers additional metrics based on performance data within the role instances to enable closer analysis of issues that occur during application processing. To find out how to enable Azure Diagnostics, see [Enabling diagnostics in Azure](cloud-services-dotnet-diagnostics.md).
+* If you want to configure verbose monitoring for your cloud service, enable Azure Diagnostics for the cloud service. *Minimal monitoring* (the default monitoring level) uses performance counters gathered from the host operating systems for role instances (virtual machines). *Verbose monitoring* gathers more metrics based on performance data within the role instances to enable closer analysis of issues that occur during application processing. To find out how to enable Azure Diagnostics, see [Enabling diagnostics in Azure](cloud-services-dotnet-diagnostics.md).
To create a cloud service with deployments of web roles or worker roles, you must [create the service package](cloud-services-model-and-package.md#servicepackagecspkg).

## Before you begin
-* If you haven't installed the Azure SDK, click **Install Azure SDK** to open the [Azure Downloads page](https://azure.microsoft.com/downloads/), and then download the SDK for the language in which you prefer to develop your code. (You'll have an opportunity to do this later.)
+* If you need to install the Azure SDK, choose **Install Azure SDK** to open the [Azure Downloads page](https://azure.microsoft.com/downloads/), and then download the SDK for the language in which you prefer to develop your code. You have an opportunity to do the installation later.
* If any role instances require a certificate, create the certificates. Cloud services require a .pfx file with a private key. You can upload the certificates to Azure as you create and deploy the cloud service.

## Create and deploy
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. Click **Create a resource > Compute**, and then scroll down to and click **Cloud Service**.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Choose **Create a resource > Compute**, and then scroll down to and select **Cloud Service**.
![Publish your cloud service1](media/cloud-services-how-to-create-deploy-portal/create-cloud-service.png)

3. In the new **Cloud Service** pane, enter a value for the **DNS name**.
4. Create a new **Resource Group** or select an existing one.
5. Select a **Location**.
-6. Click **Package**. This opens the **Upload a package** pane. Fill in the required fields. If any of your roles contain a single instance, ensure **Deploy even if one or more roles contain a single instance** is selected.
+6. Select **Package**, which opens the **Upload a package** pane. Fill in the required fields. If any of your roles contain a single instance, ensure **Deploy even if one or more roles contain a single instance** is selected.
7. Make sure that **Start deployment** is selected.
-8. Click **OK** which will close the **Upload a package** pane.
-9. If you do not have any certificates to add, click **Create**.
+8. Select **OK**, which closes the **Upload a package** pane.
+9. If you don't have any certificates to add, choose **Create**.
![Publish your cloud service2](media/cloud-services-how-to-create-deploy-portal/select-package.png)
To create a cloud service with deployments of web roles or worker roles, you mus
If your deployment package was [configured to use certificates](cloud-services-configure-ssl-certificate-portal.md#modify), you can upload the certificate now.

1. Select **Certificates**, and on the **Add certificates** pane, select the TLS/SSL certificate .pfx file, and then provide the **Password** for the certificate.
-2. Click **Attach certificate**, and then click **OK** on the **Add certificates** pane.
-3. Click **Create** on the **Cloud Service** pane. When the deployment has reached the **Ready** status, you can proceed to the next steps.
+2. Select **Attach certificate**, and then choose **OK** on the **Add certificates** pane.
+3. Select **Create** on the **Cloud Service** pane. When the deployment reaches the **Ready** status, proceed to the next steps.
![Publish your cloud service3](media/cloud-services-how-to-create-deploy-portal/attach-cert.png)

## Verify your deployment completed successfully
-1. Click the cloud service instance.
+1. Select the cloud service instance.
The status should show that the service is **Running**.
-2. Under **Essentials**, click the **Site URL** to open your cloud service in a web browser.
+2. Under **Essentials**, select the **Site URL** to open your cloud service in a web browser.
![CloudServices_QuickGlance](./media/cloud-services-how-to-create-deploy-portal/running.png)
cloud-services Cloud Services How To Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-manage-portal.md
Title: Common cloud service management tasks | Microsoft Docs
description: Learn how to manage Cloud Services in the Azure portal. These examples use the Azure portal. Previously updated : 02/21/2023 Last updated : 07/23/2024
In the **Cloud Services** area of the Azure portal, you can:
* Link resources to your cloud service so that you can see the resource dependencies and scale the resources together.
* Delete a cloud service or a deployment.
-For more information about how to scale your cloud service, see [Configure auto-scaling for a cloud service in the portal](cloud-services-how-to-scale-portal.md).
+For more information about how to scale your cloud service, see [Configure autoscaling for a cloud service in the portal](cloud-services-how-to-scale-portal.md).
## Update a cloud service role or deployment

If you need to update the application code for your cloud service, use **Update** on the cloud service blade. You can update a single role or all roles. To update, you can upload a new service package or service configuration file.
If you need to update the application code for your cloud service, use **Update*
Azure can guarantee only 99.95 percent service availability during a cloud service update if each role has at least two role instances (virtual machines). With two role instances, one virtual machine processes client requests while the other is updated.
-6. Select the **Start deployment** check box to apply the update after the upload of the package has finished.
+6. Select the **Start deployment** check box to apply the update after the upload of the package finishes.
7. Select **OK** to begin updating the service.
There are two key prerequisites for a successful deployment swap:
- All instances of your roles must be running before you can perform the swap. You can check the status of your instances on the **Overview** blade of the Azure portal. Alternatively, you can use the [Get-AzureRole](/powershell/module/servicemanagement/azure/get-azurerole) command in Windows PowerShell.
-Note that guest OS updates and service healing operations also can cause deployment swaps to fail. For more information, see [Troubleshoot cloud service deployment problems](cloud-services-troubleshoot-deployment-problems.md).
+> [!NOTE]
+> Guest OS updates and service healing operations also can cause deployment swaps to fail. For more information, see [Troubleshoot cloud service deployment problems](cloud-services-troubleshoot-deployment-problems.md).
**Does a swap incur downtime for my application? How should I handle it?**
-As described in the previous section, a deployment swap is typically fast because it's just a configuration change in the Azure load balancer. In some cases, it can take 10 or more seconds and result in transient connection failures. To limit impact to your customers, consider implementing [client retry logic](/azure/architecture/best-practices/transient-faults).
+As described in the previous section, a deployment swap is typically fast because it's just a configuration change in the Azure load balancer. In some cases, it can take 10 or more seconds and result in transient connection failures. To limit the impact to your customers, consider implementing [client retry logic](/azure/architecture/best-practices/transient-faults).
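For illustration, client retry logic along those lines can be as small as the following sketch (not from the article; `fetchWithRetry` is a hypothetical helper, and the attempt count and backoff values are arbitrary):

```js
// Sketch: retry a request with exponential backoff to ride out transient
// connection failures, such as those seen briefly during a deployment swap.
async function fetchWithRetry(url, attempts = 3, delayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetch(url); // global fetch: Node.js 18+ or any modern browser
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries; surface the error
      await new Promise(resolve => setTimeout(resolve, delayMs * 2 ** i));
    }
  }
}
```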
## Delete deployments and a cloud service

Before you can delete a cloud service, you must delete each existing deployment.
-To save compute costs, you can delete the staging deployment after you verify that your production deployment is working as expected. You are billed for compute costs for deployed role instances that are stopped.
+To save compute costs, you can delete the staging deployment after you verify that your production deployment is working as expected. Even if you stop your deployed role instances, Azure bills you for compute costs.
Use the following procedure to delete a deployment or your cloud service.
cloud-services Cloud Services How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-monitor.md
Title: Monitor an Azure Cloud Service (classic) | Microsoft Docs
description: Describes what monitoring an Azure Cloud Service involves and what some of your options are. Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the `Microsoft.Azure.Diagnostics` extension applied to a role, that role can collect additional points of data. This article provides an introduction to Azure Diagnostics for Cloud Services.
+You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the `Microsoft.Azure.Diagnostics` extension applied to a role, that role can collect more points of data. This article provides an introduction to Azure Diagnostics for Cloud Services.
-With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data is not stored in your storage account and has no additional cost associated with it.
-
-With advanced monitoring, additional metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured by role; you can use different storage accounts for different roles. This is configured with a connection string in the [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files.
+With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data isn't stored in your storage account and has no additional cost associated with it.
+With advanced monitoring, more metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured per role; you can use different storage accounts for different roles. You use a connection string in the [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files for configuration.
## Basic monitoring

As stated in the introduction, a cloud service automatically collects basic monitoring data from the host virtual machine. This data includes CPU percentage, network in/out, and disk read/write. The collected monitoring data is automatically displayed on the overview and metrics pages of the cloud service, in the Azure portal.
-Basic monitoring does not require a storage account.
+Basic monitoring doesn't require a storage account.
![basic cloud service monitoring tiles](media/cloud-services-how-to-monitor/basic-tiles.png)

## Advanced monitoring
-Advanced monitoring involves using the **Azure Diagnostics** extension (and optionally the Application Insights SDK) on the role you want to monitor. The diagnostics extension uses a config file (per role) named **diagnostics.wadcfgx** to configure the diagnostics metrics monitored. The Azure Diagnostic extension collects and stores data in an Azure Storage account. These settings are configured in the **.wadcfgx**, [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef), and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files. This means that there is an extra cost associated with advanced monitoring.
+Advanced monitoring involves using the **Azure Diagnostics** extension (and optionally the Application Insights SDK) on the role you want to monitor. The diagnostics extension uses a config file (per role) named **diagnostics.wadcfgx** to configure the diagnostics metrics monitored. The Azure Diagnostic extension collects and stores data in an Azure Storage account. These settings are configured in the **.wadcfgx**, [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef), and [.cscfg](cloud-services-model-and-package.md#serviceconfigurationcscfg) files. This means that there's an extra cost associated with advanced monitoring.
As each role is created, Visual Studio adds the Azure Diagnostics extension to it. This diagnostics extension can collect the following types of information:
As each role is created, Visual Studio adds the Azure Diagnostics extension to i
* Application logs
* Windows event logs
* .NET event source
-* IIS logs
-* Manifest based ETW
-* Crash dumps
+* Internet Information Services (IIS) logs
+* Manifest-based Event Tracing for Windows (ETW)
* Customer error logs

> [!IMPORTANT]
There are two config files you must change for advanced diagnostics to be enable
### ServiceDefinition.csdef
-In the **ServiceDefinition.csdef** file, add a new setting named `Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString` for each role that uses advanced diagnostics. Visual Studio adds this value to the file when you create a new project. In case it is missing, you can add it now.
+In the **ServiceDefinition.csdef** file, add a new setting named `Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString` for each role that uses advanced diagnostics. Visual Studio adds this value to the file when you create a new project. In case it's missing, you can add it now.
```xml
<ServiceDefinition name="AnsurCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">
In the **ServiceDefinition.csdef** file, add a new setting named `Microsoft.Wind
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />
```
-This defines a new setting that must be added to every **ServiceConfiguration.cscfg** file.
+This snippet defines a new setting that must be added to every **ServiceConfiguration.cscfg** file.
Most likely you have two **.cscfg** files, one named **ServiceConfiguration.cloud.cscfg** for deploying to Azure, and one named **ServiceConfiguration.local.cscfg** that is used for local deployments in the emulated environment. Open and change each **.cscfg** file. Add a setting named `Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString`. Set the value to the **Primary connection string** of the classic storage account. If you want to use the local storage on your development machine, use `UseDevelopmentStorage=true`.
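As a sketch, the resulting setting in each **.cscfg** file might look like the following; the role name, account name, and key are placeholders:

```xml
<!-- ServiceConfiguration.cloud.cscfg (sketch; account name and key are placeholders) -->
<Role name="WorkerRole1">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=<key>" />
  </ConfigurationSettings>
</Role>
<!-- In ServiceConfiguration.local.cscfg, use value="UseDevelopmentStorage=true" instead. -->
```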
Most likely you have two **.cscfg** files, one named **ServiceConfiguration.clou
## Use Application Insights
-When you publish the Cloud Service from Visual Studio, you are given the option to send the diagnostic data to Application Insights. You can create the Application Insights Azure resource at that time or send the data to an existing Azure resource. Your cloud service can be monitored by Application Insights for availability, performance, failures, and usage. Custom charts can be added to Application Insights so that you can see the data that matters the most. Role instance data can be collected by using the Application Insights SDK in your cloud service project. For more information on how to integrate Application Insights, see [Application Insights with Cloud Services](../azure-monitor/app/azure-web-apps-net-core.md).
-
-Note that while you can use Application Insights to display the performance counters (and the other settings) you have specified through the Windows Azure Diagnostics extension, you only get a richer experience by integrating the Application Insights SDK into your worker and web roles.
+When you publish the Cloud Service from Visual Studio, you have the option to send the diagnostic data to Application Insights. You can create the Application Insights Azure resource at that time or send the data to an existing Azure resource. Application Insights can monitor your cloud service for availability, performance, failures, and usage. Custom charts can be added to Application Insights so that you can see the data that matters the most. Role instance data can be collected by using the Application Insights SDK in your cloud service project. For more information on how to integrate Application Insights, see [Application Insights with Cloud Services](../azure-monitor/app/azure-web-apps-net-core.md).
+While you can use Application Insights to display the performance counters (and the other settings) you specified through the Microsoft Azure Diagnostics extension, you get a richer experience by integrating the Application Insights SDK into your worker and web roles.
## Next steps
cloud-services Cloud Services How To Scale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-portal.md
description: Learn how to use the portal to configure auto scale rules for a clo
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-Conditions can be set for a cloud service worker role that trigger a scale in or out operation. The conditions for the role can be based on the CPU, disk, or network load of the role. You can also set a condition based on a message queue or the metric of some other Azure resource associated with your subscription.
+You can set conditions for a cloud service worker role to trigger scale in or out operations. The conditions for the role can be based on the CPU, disk, or network load of the role. You can also set a condition based on a message queue or the metric of some other Azure resource associated with your subscription.
> [!NOTE]
> This article focuses on Cloud Service (classic). When you create a virtual machine (classic) directly, it is hosted in a cloud service. You can scale a standard virtual machine by associating it with an [availability set](/previous-versions/azure/virtual-machines/windows/classic/configure-availability-classic) and manually turning it on or off.
Conditions can be set for a cloud service worker role that trigger a scale in or
## Considerations

You should consider the following information before you configure scaling for your application:
-* Scaling is affected by core usage.
+* Core usage affects scaling.
- Larger role instances use more cores. You can scale an application only within the limit of cores for your subscription. For example, say your subscription has a limit of 20 cores. If you run an application with two medium-sized cloud services (a total of 4 cores), you can only scale up other cloud service deployments in your subscription by the remaining 16 cores. For more information about sizes, see [Cloud Service Sizes](cloud-services-sizes-specs.md).
+ Larger role instances use more cores. You can scale an application only within the limit of cores for your subscription. For example, say your subscription has a limit of 20 cores. If you run an application with two medium-sized cloud services (a total of four cores), you can only scale up other cloud service deployments in your subscription by the remaining 16 cores. For more information about sizes, see [Cloud Service Sizes](cloud-services-sizes-specs.md).
* You can scale based on a queue message threshold. For more information about how to use queues, see [How to use the Queue Storage Service](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli).
* You can also scale other resources associated with your subscription.
-* To enable high availability of your application, you should ensure that it is deployed with two or more role instances. For more information, see [Service Level Agreements](https://azure.microsoft.com/support/legal/sla/).
+* To enable high availability of your application, you should ensure it deploys with two or more role instances. For more information, see [Service Level Agreements](https://azure.microsoft.com/support/legal/sla/).
* Autoscale only happens when all the roles are in the **Ready** state.
You should consider the following information before you configure scaling for y
After you select your cloud service, you should have the cloud service blade visible.

1. On the cloud service blade, on the **Roles and Instances** tile, select the name of the cloud service.
- **IMPORTANT**: Make sure to click the cloud service role, not the role instance that is below the role.
+ **IMPORTANT**: Make sure to select the cloud service role, not the role instance that is below the role.
![Screenshot of the Roles and instances tile with the Worker Role With S B Queue 1 option outlined in red.](./media/cloud-services-how-to-scale-portal/roles-instances.png)

2. Select the **scale** tile.
Set the **Scale by** option to **schedule and performance rules**.
Select **Add Profile**. The profile determines which mode you want to use for the scale: **always**, **recurrence**, **fixed date**.
-After you have configured the profile and rules, select the **Save** icon at the top.
+After you configure the profile and rules, select the **Save** icon at the top.
#### Profile

The profile sets minimum and maximum instances for the scale, and also when this scale range is active.
The profile sets minimum and maximum instances for the scale, and also when this
![Cloud service scale with a fixed date](./media/cloud-services-how-to-scale-portal/select-fixed.png)
-After you have configured the profile, select the **OK** button at the bottom of the profile blade.
+After you configure the profile, select the **OK** button at the bottom of the profile blade.
#### Rule

Rules are added to a profile and represent a condition that triggers the scale.
The rule trigger is based on a metric of the cloud service (CPU usage, disk acti
![Screenshot of the Rule dialog box with the Metric name option outlined in red.](./media/cloud-services-how-to-scale-portal/rule-settings.png)
-After you have configured the rule, select the **OK** button at the bottom of the rule blade.
+After you configure the rule, select the **OK** button at the bottom of the rule blade.
## Back to manual scale

Navigate to the [scale settings](#where-scale-is-located) and set the **Scale by** option to **an instance count that I enter manually**.
This setting removes automated scaling from the role and then you can set the in
2. A role instance slider to set the instances to scale to.
3. Instances of the role to scale to.
-After you have configured the scale settings, select the **Save** icon at the top.
+After you configure the scale settings, select the **Save** icon at the top.
cloud-services Cloud Services How To Scale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-powershell.md
Title: Scale an Azure cloud service (classic) in Windows PowerShell | Microsoft Docs
-description: (classic) Learn how to use PowerShell to scale a web role or worker role in or out in Azure.
+description: Learn how to use PowerShell to scale a web role or worker role in or out in Azure cloud services (classic).
Previously updated : 02/21/2023 Last updated : 07/23/2024
You can use Windows PowerShell to scale a web role or worker role in or out by adding or removing instances.
-## Log in to Azure
+## Sign in to Azure
-Before you can perform any operations on your subscription through PowerShell, you must log in:
+Before you can perform any operations on your subscription through PowerShell, you must sign in:
```powershell
Add-AzureAccount
To scale out your role, pass the desired number of instances as the **Count** pa
Set-AzureRole -ServiceName '<your_service_name>' -RoleName '<your_role_name>' -Slot <target_slot> -Count <desired_instances>
```
-The cmdlet blocks momentarily while the new instances are provisioned and started. During this time, if you open a new PowerShell window and call **Get-AzureRole** as shown earlier, you will see the new target instance count. And if you inspect the role status in the portal, you should see the new instance starting up:
+The cmdlet blocks momentarily while the new instances are provisioned and started. During this time, if you open a new PowerShell window and call **Get-AzureRole** as shown earlier, you see the new target instance count. If you inspect the role status in the portal, you should see the new instance starting up:
![VM instance starting in portal](./media/cloud-services-how-to-scale-powershell/role-instance-starting.png)
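The earlier **Get-AzureRole** call isn't reproduced in this excerpt; a sketch with placeholder names:

```powershell
# Sketch: list the role's target instance count and per-instance status.
Get-AzureRole -ServiceName '<your_service_name>' -Slot Production -InstanceDetails
```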
-Once the new instances have started, the cmdlet will return successfully:
+Once the new instances start, the cmdlet returns successfully:
![Role instance increase success](./media/cloud-services-how-to-scale-powershell/set-azure-role-success.png)
You can scale in a role by removing instances in the same way. Set the **Count**
## Next steps
-It is not possible to configure auto-scale for cloud services from PowerShell. To do that, see [How to auto scale a cloud service](cloud-services-how-to-scale-portal.md).
+It isn't possible to configure autoscale for cloud services from PowerShell. To do that, see [How to auto scale a cloud service](cloud-services-how-to-scale-portal.md).
cloud-services Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-model-and-package.md
description: Describes the cloud service model (.csdef, .cscfg) and package (.cs
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-A cloud service is created from three components, the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and how it's configured; collectively called the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**.
+A cloud service is created from three components: the service definition *(.csdef)*, the service config *(.cscfg)*, and the service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and its configuration; collectively, they're called the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and, among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**.
-Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you cannot alter the definition.
+Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you can't alter the definition.
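For illustration, uploading an edited **ServiceConfig.cscfg** from PowerShell might look like the following sketch, assuming the classic Service Management module used elsewhere in these articles; the service name and path are placeholders:

```powershell
# Sketch: apply a new .cscfg to a running deployment without redeploying.
# Assumes an authenticated classic (Service Management) PowerShell session.
Set-AzureDeployment -Config `
    -ServiceName 'mycloudservice' `
    -Configuration 'C:\deploy\ServiceConfiguration.Cloud.cscfg' `
    -Slot 'Production'
```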
## What would you like to know more about?

* I want to know more about the [ServiceDefinition.csdef](#csdef) and [ServiceConfig.cscfg](#cscfg) files.
* I already know about that; give me [some examples](#next-steps) of what I can configure.
* I want to create the [ServicePackage.cspkg](#cspkg).
-* I am using Visual Studio and I want to...
+* I'm using Visual Studio and I want to...
* [Create a cloud service][vs_create]
* [Reconfigure an existing cloud service][vs_reconfigure]
* [Deploy a Cloud Service project][vs_deploy]
The **ServiceDefinition.csdef** file specifies the settings that are used by Azu
</ServiceDefinition>
```
-You can refer to the [Service Definition Schema](/previous-versions/azure/reference/ee758711(v=azure.100)) for a better understanding of the XML schema used here, however, here is a quick explanation of some of the elements:
+You can refer to the [Service Definition Schema](/previous-versions/azure/reference/ee758711(v=azure.100)) for a better understanding of the XML schema used here, however, here's a quick explanation of some of the elements:
**Sites**
Contains the definitions for websites or web applications that are hosted in IIS7.
Contains tasks that are run when the role starts. The tasks are defined in a .cm
## ServiceConfiguration.cscfg

The configuration of the settings for your cloud service is determined by the values in the **ServiceConfiguration.cscfg** file. You specify the number of instances that you want to deploy for each role in this file. The values for the configuration settings that you defined in the service definition file are added to the service configuration file. The thumbprints for any management certificates that are associated with the cloud service are also added to the file. The [Azure Service Configuration Schema (.cscfg File)](/previous-versions/azure/reference/ee758710(v=azure.100)) provides the allowable format for a service configuration file.
-The service configuration file is not packaged with the application, but is uploaded to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles:
+The service configuration file isn't packaged with the application. The configuration is uploaded to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles:
```xml
<?xml version="1.0"?>
The service configuration file is not packaged with the application, but is uplo
</ServiceConfiguration>
```
-You can refer to the [Service Configuration Schema](/previous-versions/azure/reference/ee758710(v=azure.100)) for better understanding the XML schema used here, however, here is a quick explanation of the elements:
+You can refer to the [Service Configuration Schema](/previous-versions/azure/reference/ee758710(v=azure.100)) for better understanding the XML schema used here, however, here's a quick explanation of the elements:
**Instances**
-Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, it is recommended that you deploy more than one instance of your web-facing roles. By deploying more than one instance, you are adhering to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service.
+Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, we recommend you deploy more than one instance of your web-facing roles. By deploying more than one instance, you adhere to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service.
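As a sketch, the two-instance guideline corresponds to an `Instances` element like this in the role's configuration (the role name is a placeholder):

```xml
<!-- Sketch: two role instances satisfy the SLA's two-instance guideline -->
<Role name="WebRole1">
  <Instances count="2" />
</Role>
```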
**ConfigurationSettings**
Configures the settings for the running instances for a role. The name of the `<Setting>` elements must match the setting definitions in the service definition file.
Configures the certificates that are used by the service. The previous code exam
## Defining ports for role instances

Azure allows only one entry point to a web role, meaning that all traffic occurs through one IP address. You can configure your websites to share a port by configuring the host header to direct the request to the correct location. You can also configure your applications to listen to well-known ports on the IP address.
-The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80, and the web applications are configured to receive requests from an alternate host header that is called "mail.mysite.cloudapp.net".
+The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80. The web applications are configured to receive requests from an alternate host header that is called "mail.mysite.cloudapp.net".
```xml
<WebRole>
The following sample shows the configuration for a web role with a website and w
## Changing the configuration of a role
-You can update the configuration of your cloud service while it is running in Azure, without taking the service offline. To change configuration information, you can either upload a new configuration file, or edit the configuration file in place and apply it to your running service. The following changes can be made to the configuration of a service:
+You can update the configuration of your cloud service while it runs in Azure, without taking the service offline. To change configuration information, you can either upload a new configuration file, or edit the configuration file in place and apply it to your running service. The following changes can be made to the configuration of a service:
* **Changing the values of configuration settings**
  When a configuration setting changes, a role instance can choose to apply the change while the instance is online, or to recycle the instance gracefully and apply the change while the instance is offline.
* **Changing the service topology of role instances**
- Topology changes do not affect running instances, except where an instance is being removed. All remaining instances generally do not need to be recycled; however, you can choose to recycle role instances in response to a topology change.
+ Topology changes don't affect running instances, except where an instance is being removed. All remaining instances generally don't need to be recycled; however, you can choose to recycle role instances in response to a topology change.
* **Changing the certificate thumbprint**
- You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate and bring it back online after the change is complete.
+ You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate. Azure brings it back online after the change is complete.
### Handling configuration changes with Service Runtime Events

The [Azure Runtime Library](/previous-versions/azure/reference/mt419365(v=azure.100)) includes the [Microsoft.WindowsAzure.ServiceRuntime](/previous-versions/azure/reference/ee741722(v=azure.100)) namespace, which provides classes for interacting with the Azure environment from a role. The [RoleEnvironment](/previous-versions/azure/reference/ee773173(v=azure.100)) class defines the following events that are raised before and after a configuration change:
Where the variables are defined as follows:
| Variable | Description |
| --- | --- |
| \[DirectoryName\] |The subdirectory under the root project directory that contains the .csdef file of the Azure project. |
| \[ServiceDefinition\] |The name of the service definition file. By default, this file is named ServiceDefinition.csdef. |
-| \[OutputFileName\] |The name for the generated package file. Typically, this is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. |
+| \[OutputFileName\] |The name for the generated package file. Typically, this variable is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. |
| \[RoleName\] |The name of the role as defined in the service definition file. |
| \[RoleBinariesDirectory] |The location of the binary files for the role. |
| \[VirtualPath\] |The physical directories for each virtual path defined in the Sites section of the service definition. |
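Putting the variables together, a **cspack** invocation might look like this sketch; every directory, role, and file name below is an illustrative placeholder:

```powershell
# Sketch: package a web role with cspack.exe, which ships with the Azure SDK.
cspack .\MyProject\ServiceDefinition.csdef `
    /role:WebRole1;.\MyProject\WebRole1\bin `
    /sites:WebRole1;Web;.\MyProject\WebRole1 `
    /out:MyApplication.cspkg
```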
I'm creating a cloud service package and I want to...
* [Setup remote desktop for a cloud service instance][remotedesktop]
* [Deploy a Cloud Service project][deploy]
-I am using Visual Studio and I want to...
+I'm using Visual Studio and I want to...
* [Create a new cloud service][vs_create]
* [Reconfigure an existing cloud service][vs_reconfigure]
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
Title: Node.js application using Socket.io - Azure
description: Socket.IO is now natively supported on Azure. This old tutorial shows how to self-host a socket.IO-based chat application on Azure. The latest recommendation is to let Socket.IO provide real time communication for a Node.js server and clients, and let Azure manage scaling client connections. Previously updated : 08/31/2023 Last updated : 07/23/2024
server and clients. This tutorial walks you through hosting a
Socket.IO-based chat application on Azure. For more information on Socket.IO, see [socket.io](https://socket.io).
-A screenshot of the completed application is below:
+The following screenshot shows the completed application:
![A browser window displaying the service hosted on Azure][completed-app]
Ensure that the following products and versions are installed to successfully co
* Install [Python version 2.7.10](https://www.python.org/)

## Create a Cloud Service Project
-The following steps create the cloud service project that will host the Socket.IO application.
+The following steps create the cloud service project that hosts the Socket.IO application.
1. From the **Start Menu** or **Start Screen**, search for **Windows PowerShell**. Then right-click **Windows PowerShell** and select **Run As Administrator**.
The following steps create the cloud service project that will host the Socket.I
PS C:\Node> Add-AzureNodeWorkerRole
```
- You will see the following response:
+ You see the following response:
![The output of the new-azureservice and add-azurenodeworkerrole cmdlets](./media/cloud-services-nodejs-chat-app-socketio/socketio-1.png)

## Download the Chat Example
-For this project, we will use the chat example from the [Socket.IO
+For this project, we use the chat example from the [Socket.IO
GitHub repository]. Perform the following steps to download the example and add it to the project you previously created.
and add it to the project you previously created.
![Explorer, displaying the contents of the examples\\chat directory extracted from the archive][chat-contents]
- The highlighted items in the screenshot above are the files copied from the **examples\\chat** directory
+ The highlighted items in the previous screenshot are the files copied from the **examples\\chat** directory
-3. In the **C:\\node\\chatapp\\WorkerRole1** directory, delete the **server.js** file, and then rename the **app.js** file to **server.js**. This removes the default **server.js** file created previously by the **Add-AzureNodeWorkerRole** cmdlet and replaces it with the application file from the chat example.
+3. In the **C:\\node\\chatapp\\WorkerRole1** directory, delete the **server.js** file, and then rename the **app.js** file to **server.js**. This step removes the default **server.js** file created previously by the **Add-AzureNodeWorkerRole** cmdlet and replaces it with the application file from the chat example.
### Modify Server.js and Install Modules

Before testing the application in the Azure emulator, we must
server.js file:
1. Open the **server.js** file in Visual Studio or any text editor.
-2. Find the **Module dependencies** section at the beginning of server.js and change the line containing **sio = require('..//..//lib//socket.io')** to **sio = require('socket.io')** as shown below:
+2. Find the **Module dependencies** section at the beginning of server.js and change the line containing **sio = require('..//..//lib//socket.io')** to **sio = require('socket.io')** as follows:
```js
var express = require('express')
server.js file:
3. To ensure the application listens on the correct port, open server.js in Notepad or your favorite editor, and then change the
- following line by replacing **3000** with **process.env.port** as shown below:
+ following line by replacing **3000** with **process.env.port** as follows:
```js
//app.listen(3000, function () {           //Original
After saving the changes to **server.js**, use the following steps to
install required modules, and then test the application in the Azure emulator:
-1. Using **Azure PowerShell**, change directories to the **C:\\node\\chatapp\\WorkerRole1** directory and use the following command to install the modules required by this application:
+1. In **Azure PowerShell**, change directories to the **C:\\node\\chatapp\\WorkerRole1** directory and use the following command to install the modules required by this application:
```powershell
PS C:\node\chatapp\WorkerRole1> npm install
```
- This will install the modules listed in the package.json file. After
+ This command installs the modules listed in the package.json file. After
the command completes, you should see output similar to the
- following:
+ following screenshot:
![The output of the npm install command][The-output-of-the-npm-install-command]

2. Since this example was originally a part of the Socket.IO GitHub repository, and directly referenced the Socket.IO library by
- relative path, Socket.IO was not referenced in the package.json
+ relative path, Socket.IO wasn't referenced in the package.json
file, so we must install it by issuing the following command:

```powershell
Azure emulator:
2. Open a browser and navigate to `http://127.0.0.1`.
3. When the browser window opens, enter a nickname and then hit enter.
- This will allow you to post messages as a specific nickname. To test
- multi-user functionality, open additional browser windows using the
+ This step allows you to post messages as a specific nickname. To test
+ multi-user functionality, open more browser windows using the
same URL and enter different nicknames.

![Two browser windows displaying chat messages from User1 and User2](./media/cloud-services-nodejs-chat-app-socketio/socketio-8.png)
messages between different clients using Socket.IO.
## Next steps
-In this tutorial you learned how to create a basic chat application hosted in an Azure Cloud Service. To learn how to host this application in an Azure Website, see [Build a Node.js Chat Application with Socket.IO on an Azure Web Site][chatwebsite].
+In this tutorial, you learned how to create a basic chat application hosted in an Azure Cloud Service. To learn how to host this application in an Azure Website, see [Build a Node.js Chat Application with Socket.IO on an Azure Web Site][chatwebsite].
For more information, see also the [Node.js Developer Center](/azure/developer/javascript/).
cloud-services Cloud Services Nodejs Develop Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md
Title: Node.js Getting Started Guide
-description: Learn how to create a simple Node.js web application and deploy it to an Azure cloud service.
+description: Learn how to create a Node.js web application and deploy it to an Azure cloud service.
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This tutorial shows how to create a simple Node.js application running in an Azure Cloud Service. Cloud Services are the building blocks of scalable cloud applications in Azure. They allow the separation and independent management and scale-out of front-end and back-end components of your application. Cloud Services provide a robust dedicated virtual machine for hosting each role reliably.
-
-For more information on Cloud Services, and how they compare to Azure Websites and Virtual machines, see [Azure Websites, Cloud Services and Virtual Machines comparison].
+This tutorial shows how to create a Node.js application running in an Azure Cloud Service. Cloud Services are the building blocks of scalable cloud applications in Azure. They allow the separation and independent management and scale-out of front-end and back-end components of your application. Cloud Services provide a robust dedicated virtual machine for hosting each role reliably.
> [!TIP]
-> Looking to build a simple website? If your scenario involves just a simple website front-end, consider [using a lightweight web app]. You can easily upgrade to a Cloud Service as your web app grows and your requirements change.
+> Looking to build a website? If your scenario involves just a simple website front-end, consider [using a lightweight web app]. You can easily upgrade to a Cloud Service as your web app grows and your requirements change.
-By following this tutorial, you will build a simple web application hosted inside a web role. You will use the compute emulator to test your application locally, then deploy it using PowerShell command-line tools.
+By following this tutorial, you build a web application hosted inside a web role. You use the compute emulator to test your application locally, then deploy it using PowerShell command-line tools.
-The application is a simple "hello world" application:
+The application is a "hello world" application:
![A web browser displaying the Hello World web page][A web browser displaying the Hello World web page]
Perform the following tasks to create a new Azure Cloud Service project, along w
1. Run **Windows PowerShell** as Administrator; from the **Start Menu** or **Start Screen**, search for **Windows PowerShell**.
2. [Connect PowerShell] to your subscription.
-3. Enter the following PowerShell cmdlet to create to create the project:
+3. Enter the following PowerShell cmdlet to create the project:
```powershell
New-AzureServiceProject helloworld
Perform the following tasks to create a new Azure Cloud Service project, along w
> [!NOTE]
> If you do not specify a role name, a default name is used. You can provide a name as the first cmdlet parameter: `Add-AzureNodeWorkerRole MyRole`
-The Node.js app is defined in the file **server.js**, located in the directory for the web role (**WebRole1** by default). Here is the code:
+The Node.js app is defined in the file **server.js**, located in the directory for the web role (**WebRole1** by default). Here's the code:
```js
var http = require('http');
To deploy your application to Azure, you must first download the publishing sett
Get-AzurePublishSettingsFile
```
- This will use your browser to navigate to the publish settings download page. You may be prompted to log in with a Microsoft Account. If so, use the account associated with your Azure subscription.
+ This command uses your browser to navigate to the publish settings download page. You may be prompted to sign in with a Microsoft Account. If so, use the account associated with your Azure subscription.
Save the downloaded profile to a file location you can easily access.

2. Run the following cmdlet to import the publishing profile you downloaded:
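The import cmdlet itself isn't shown in this excerpt; a sketch, with a placeholder path:

```powershell
# Sketch: import the downloaded publish settings file (path is a placeholder).
Import-AzurePublishSettingsFile 'C:\Users\me\Downloads\MySubscription.publishsettings'
```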
$ServiceName = "NodeHelloWorld" + $(Get-Date -Format ('ddhhmm'))
Publish-AzureServiceProject -ServiceName $ServiceName -Location "East US" -Launch
```
-* **-ServiceName** specifies the name for the deployment. This must be a unique name, otherwise the publish process will fail. The **Get-Date** command tacks on a date/time string that should make the name unique.
-* **-Location** specifies the datacenter that the application will be hosted in. To see a list of available datacenters, use the **Get-AzureLocation** cmdlet.
-* **-Launch** opens a browser window and navigates to the hosted service after deployment has completed.
+* **-ServiceName** specifies the name for the deployment. This value must be a unique name; otherwise, the publish process fails. The **Get-Date** command tacks on a date/time string that should make the name unique.
+* **-Location** specifies the datacenter that hosts the application. To see a list of available datacenters, use the **Get-AzureLocation** cmdlet.
+* **-Launch** opens a browser window and navigates to the hosted service after the deployment completes.
-After publishing succeeds, you will see a response similar to the following:
+After publishing succeeds, you see a response similar to the following screenshot:
![The output of the Publish-AzureService command][The output of the Publish-AzureService command]

> [!NOTE]
> It can take several minutes for the application to deploy and become available when first published.
-Once the deployment has completed, a browser window will open and navigate to the cloud service.
+Once the deployment completes, a browser window opens and navigates to the cloud service.
![A browser window displaying the hello world page; the URL indicates the page is hosted on Azure.][A browser window displaying the hello world page; the URL indicates the page is hosted on Azure.]
Your application is now running on Azure.
The **Publish-AzureServiceProject** cmdlet performs the following steps:

1. Creates a package to deploy. The package contains all the files in your application folder.
-2. Creates a new **storage account** if one does not exist. The Azure storage account is used to store the application package during deployment. You can safely delete the storage account after deployment is done.
-3. Creates a new **cloud service** if one does not already exist. A **cloud service** is the container in which your application is hosted when it is deployed to Azure. For more information, see [Overview of Creating a Hosted Service for Azure].
+2. Creates a new **storage account** if one doesn't exist. The Azure storage account is used to store the application package during deployment. You can safely delete the storage account after deployment is done.
+3. Creates a new **cloud service** if one doesn't already exist. A **cloud service** is the container in which your application is hosted when it's deployed to Azure. For more information, see [Overview of Creating a Hosted Service for Azure].
4. Publishes the deployment package to Azure. ## Stopping and deleting your application
-After deploying your application, you may want to disable it so you can avoid extra costs. Azure bills web role instances per hour of server time consumed. Server time is consumed once your application is deployed, even if the instances are not running and are in the stopped state.
+After deploying your application, you may want to disable it so you can avoid extra costs. Azure bills web role instances per hour of server time consumed. Server time is consumed once your application is deployed, even if the instances aren't running and are in the stopped state.
1. In the Windows PowerShell window, stop the service deployment created in the previous section with the following cmdlet:
After deploying your application, you may want to disable it so you can avoid ex
Stop-AzureService ```
- Stopping the service may take several minutes. When the service is stopped, you receive a message indicating that it has stopped.
+ Stopping the service may take several minutes. When the service is stopped, you receive a message indicating that it stopped.
![The status of the Stop-AzureService command][The status of the Stop-AzureService command] 2. To delete the service, call the following cmdlet:
After deploying your application, you may want to disable it so you can avoid ex
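The delete call is the classic **Remove-AzureService** cmdlet, sketched here; adding **-Force** skips the confirmation prompt if you prefer:

```powershell
# Delete the cloud service deployment for the current project
Remove-AzureService
```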
When prompted, enter **Y** to delete the service.
- Deleting the service may take several minutes. After the service has been deleted you receive a message indicating that the service was deleted.
+ Deleting the service may take several minutes. After you delete the service, you receive a message indicating that the service was deleted.
![The status of the Remove-AzureService command][The status of the Remove-AzureService command]
cloud-services Cloud Services Nodejs Develop Deploy Express App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-express-app.md
Title: Build and deploy a Node.js Express app to Azure Cloud Services (classic)
-description: Use this tutorial to create a new application using the Express module, which provides an MVC framework for creating Node.js web applications.
+description: Use this tutorial to create a new application using the Express module, which provides a Model-View-Controller (MVC) framework for creating Node.js web applications.
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)] Node.js includes a minimal set of functionality in the core runtime.
-Developers often use 3rd party modules to provide additional
-functionality when developing a Node.js application. In this tutorial
-you'll create a new application using the [Express](https://github.com/expressjs/express) module, which provides an MVC framework for creating Node.js web applications.
+Developers often use non-Microsoft modules to provide more
+functionality when developing a Node.js application. In this tutorial,
+you create a new application using the [Express](https://github.com/expressjs/express) module, which provides a Model-View-Controller framework for creating Node.js web applications.
-A screenshot of the completed application is below:
+The following screenshot shows the completed application:
![A web browser displaying Welcome to Express in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node36.png)
Perform the following steps to create a new cloud service project named `express
``` > [!NOTE]
- > By default, **Add-AzureNodeWebRole** uses an older version of Node.js. The **Set-AzureServiceProjectRole** statement above instructs Azure to use v0.10.21 of Node. Note the parameters are case-sensitive. You can verify the correct version of Node.js has been selected by checking the **engines** property in **WebRole1\package.json**.
->
->
+ > By default, **Add-AzureNodeWebRole** uses an older version of Node.js. The preceding **Set-AzureServiceProjectRole** line instructs Azure to use v0.10.21 of Node. The parameters are case-sensitive. You can verify that the correct version of Node.js is selected by checking the **engines** property in **WebRole1\package.json**.
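For reference, the version-pinning call mentioned in the note has this shape (a sketch that assumes the default role name **WebRole1**):

```powershell
# Pin the web role's runtime to the Node.js version used in this article
Set-AzureServiceProjectRole WebRole1 Node 0.10.21
```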
## Install Express 1. Install the Express generator by issuing the following command:
Perform the following steps to create a new cloud service project named `express
PS C:\node\expressapp> npm install express-generator -g ```
- The output of the npm command should look similar to the result below.
+ The following screenshot shows the output of the npm command. Your output should look similar.
![Windows PowerShell displaying the output of the npm install express command.](./media/cloud-services-nodejs-develop-deploy-express-app/express-g.png)+ 2. Change directories to the **WebRole1** directory and use the express command to generate a new application: ```powershell PS C:\node\expressapp\WebRole1> express ```
- You'll be prompted to overwrite your earlier application. Enter **y** or **yes** to continue. Express will generate the app.js file and a folder structure for building your application.
+ To continue, enter **y** or **yes** when prompted to overwrite your earlier application. Express generates the app.js file and a folder structure for building your application.
![The output of the express command](./media/cloud-services-nodejs-develop-deploy-express-app/node23.png)
-3. To install additional dependencies defined in the package.json file,
+
+3. To install the other dependencies defined in the package.json file,
enter the following command: ```powershell
Perform the following steps to create a new cloud service project named `express
``` ![The output of the npm install command](./media/cloud-services-nodejs-develop-deploy-express-app/node26.png)
-4. Use the following command to copy the **bin/www** file to **server.js**. This is so the cloud service can find the entry point for this application.
+
+4. Use the following command to copy the **bin/www** file to **server.js**. This step allows the cloud service to find the entry point for this application.
```powershell PS C:\node\expressapp\WebRole1> copy bin/www server.js ``` After this command completes, you should have a **server.js** file in the WebRole1 directory.+ 5. Modify the **server.js** to remove one of the '.' characters from the following line. ```js var app = require('../app'); ```
- After making this modification, the line should appear as follows.
+ Once you make this modification, the line should appear as follows:
```js var app = require('./app');
Perform the following steps to create a new cloud service project named `express
## Modifying the View Now modify the view to display the message "Welcome to Express in
-Azure".
+Azure."
1. Enter the following command to open the index.jade file:
Azure".
![The index.jade file, the last line reads: p Welcome to \#{title} in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node31.png) 3. Save the file and exit Notepad.
-4. Refresh your browser and you'll see your changes.
+4. To see your changes, refresh your browser.
![A browser window, the page contains Welcome to Express in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node32.png)
In the Azure PowerShell window, use the **Publish-AzureServiceProject** cmdlet t
PS C:\node\expressapp\WebRole1> Publish-AzureServiceProject -ServiceName myexpressapp -Location "East US" -Launch ```
-Once the deployment operation completes, your browser will open and display the web page.
+Once the deployment operation completes, your browser opens and displays the web page.
![A web browser displaying the Express page. The URL indicates it is now hosted on Azure.](./media/cloud-services-nodejs-develop-deploy-express-app/node36.png)
cloud-services Cloud Services Performance Testing Visual Studio Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-performance-testing-visual-studio-profiler.md
Title: Profiling a Cloud Service (classic) Locally in the Compute Emulator | Mic
description: Investigate performance issues in cloud services with the Visual Studio profiler Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-A variety of tools and techniques are available for testing the performance of cloud services.
+Various tools and techniques are available for testing the performance of cloud services.
When you publish a cloud service to Azure, you can have Visual Studio collect profiling data and then analyze it locally, as described in [Profiling an Azure Application][1].
-You can also use diagnostics to track a variety of performance
+You can also use diagnostics to track numerous performance
counters, as described in [Using performance counters in Azure][2]. You might also want to profile your application locally in the compute emulator before deploying it to the cloud.
-This article covers the CPU Sampling method of profiling, which can be done locally in the emulator. CPU sampling is a method of profiling that is not very intrusive. At a designated sampling interval, the profiler takes a snapshot of the call stack. The data is collected over a period of time, and shown in a report. This method of profiling tends to indicate where in a computationally intensive application most of the CPU work is being done. This gives you the opportunity to focus on the "hot path" where your application is spending the most time.
+This article covers the CPU Sampling method of profiling, which can be done locally in the emulator. CPU sampling is a method of profiling that is minimally intrusive. At a designated sampling interval, the profiler takes a snapshot of the call stack. The data is collected over a period of time, and shown in a report. This method of profiling tends to indicate where in a computationally intensive application most of the CPU work is being done, giving you the opportunity to focus on the "hot path" where your application is spending the most time.
-## 1: Configure Visual Studio for profiling
-First, there are a few Visual Studio configuration options that might be helpful when profiling. To make sense of the profiling reports, you'll need symbols (.pdb files) for your application and also symbols for system libraries. You'll want to make sure that you reference the available symbol servers. To do this, on the **Tools** menu in Visual Studio, choose **Options**, then choose **Debugging**, then **Symbols**. Make sure that Microsoft Symbol Servers is listed under **Symbol file (.pdb) locations**. You can also reference https://referencesource.microsoft.com/symbols, which might have additional symbol files.
+## Configure Visual Studio for profiling
+First, there are a few Visual Studio configuration options that might be helpful when profiling. To make sense of the profiling reports, you need symbols (.pdb files) for your application and also symbols for system libraries. Make sure you reference the available symbol servers; to do so, on the **Tools** menu in Visual Studio, choose **Options**, then choose **Debugging**, then **Symbols**. Make sure that Microsoft Symbol Servers is listed under **Symbol file (.pdb) locations**. You can also reference https://referencesource.microsoft.com/symbols, which might have more symbol files.
![Symbol options][4]
If desired, you can simplify the reports that the profiler generates by setting
![Just My Code options][17]
-You can use these instructions with an existing project or with a new project. If you create a new project to try the techniques described below, choose a C# **Azure Cloud Service** project, and select a **Web Role** and a **Worker Role**.
+You can use these instructions with an existing project or with a new project. If you create a new project to try the following techniques, choose a C# **Azure Cloud Service** project, and select a **Web Role** and a **Worker Role**.
![Azure Cloud Service project roles][5]
private async Task RunAsync(CancellationToken cancellationToken)
} ```
-Build and run your cloud service locally without debugging (Ctrl+F5), with the solution configuration set to **Release**. This ensures that all files and folders are created for running the application locally, and ensures that all the emulators are started. Start the Compute Emulator UI from the taskbar to verify that your worker role is running.
+Build and run your cloud service locally without debugging (Ctrl+F5), with the solution configuration set to **Release**. This setting ensures that all files and folders are created for running the application locally and that all the emulators are started. To verify that your worker role is running, start the Compute Emulator UI from the taskbar.
-## 2: Attach to a process
+## Attach to a process
Instead of profiling the application by starting it from the Visual Studio 2010 IDE, you must attach the profiler to a running process.
-To attach the profiler to a process, on the **Analyze** menu, choose **Profiler** and **Attach/Detach**.
+To attach the profiler to a process, go to the **Analyze** menu, select **Profiler**, and choose **Attach/Detach**.
![Attach profile option][6]
For a worker role, find the WaWorkerHost.exe process.
![WaWorkerHost process][7]
-If your project folder is on a network drive, the profiler will ask you to provide another location to save the profiling reports.
+If your project folder is on a network drive, the profiler asks you to provide another location to save the profiling reports.
You can also attach to a web role by attaching to WaIISHost.exe. If there are multiple worker role processes in your application, you need to use the processID to distinguish them. You can query the processID programmatically by accessing the Process object. For example, if you add this code to the Run method of the RoleEntryPoint-derived class in a role, you can look at the
-log in the Compute Emulator UI to know what process to connect to.
+log in the Compute Emulator UI to know which process to connect to.
```csharp var process = System.Diagnostics.Process.GetCurrentProcess();
Open the worker role log console window in the Compute Emulator UI by clicking o
![View process ID][9]
-One you've attached, perform the steps in your application's UI (if needed) to reproduce the scenario.
+Once you attach, perform the steps in your application's UI (if needed) to reproduce the scenario.
When you want to stop profiling, choose the **Stop Profiling** link. ![Stop Profiling option][10]
-## 3: View performance reports
+## View performance reports
The performance report for your application is displayed. At this point, the profiler stops executing, saves data in a .vsp file, and displays a report
that shows an analysis of this data.
![Profiler report][11]
-If you see String.wstrcpy in the Hot Path, click on Just My Code to change the view to show user code only. If you see String.Concat, try pressing the Show All Code button.
+If you see String.wstrcpy in the Hot Path, select **Just My Code** to change the view to show user code only. If you see String.Concat, try selecting the **Show All Code** button.
You should see the Concatenate method and String.Concat taking up a large portion of the execution time. ![Analysis of report][12]
-If you added the string concatenation code in this article, you should see a warning in the Task List for this. You may also see a warning that there is an excessive amount of garbage collection, which is due to the number of strings that are created and disposed.
+If you added the string concatenation code in this article, you should see a warning in the Task List for it. You may also see a warning that there's an excessive amount of garbage collection, which is due to the number of strings created and disposed.
![Performance warnings][14]
-## 4: Make changes and compare performance
-You can also compare the performance before and after a code change. Stop the running process, and edit the code to replace the string concatenation operation with the use of StringBuilder:
+## Make changes and compare performance
+You can also compare the performance before and after a code change. To replace the string concatenation operation with the use of StringBuilder, stop the running process and edit the code:
```csharp public static string Concatenate(int number)
The reports highlight differences between the two runs.
![Comparison report][16]
-Congratulations! You've gotten started with the profiler.
+Congratulations! You completed your first profiling session.
## Troubleshooting
-* Make sure you are profiling a Release build and start without debugging.
-* If the Attach/Detach option is not enabled on the Profiler menu, run the Performance Wizard.
+* Make sure you profile a Release build and start without debugging.
+* If the Attach/Detach option isn't enabled on the Profiler menu, run the Performance Wizard.
* Use the Compute Emulator UI to view the status of your application. * If you have problems starting applications in the emulator, or attaching the profiler, shut down the compute emulator and restart it. If that doesn't solve the problem, try rebooting. This problem can occur if you use the Compute Emulator to suspend and remove running deployments.
-* If you have used any of the profiling commands from the
- command line, especially the global settings, make sure that VSPerfClrEnv /globaloff has been called and that VsPerfMon.exe has been shut down.
-* If when sampling, you see the message "PRF0025: No data was collected," check that the process you attached to has CPU activity. Applications that are not doing any computational work might not produce any sampling data. It's also possible that the process exited before any sampling was done. Check to see that the Run method for a role that you are profiling does not terminate.
+* If you used any of the profiling commands from the
+  command line, especially the global settings, make sure you call VSPerfClrEnv /globaloff and shut down VsPerfMon.exe; a cleanup sketch follows this list.
+* If you see the message "PRF0025: No data was collected" when sampling, check that the process you attached to has CPU activity. Applications that aren't doing any computational work might not produce any sampling data. It's also possible that the process exited before any sampling was done. Check that the Run method for the role that you profile doesn't terminate.
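A minimal cleanup sketch for that command-line case, assuming the Visual Studio profiling tools **VSPerfClrEnv** and **VSPerfCmd** are on your PATH:

```powershell
# Clear the machine-wide profiling environment variables
VSPerfClrEnv /globaloff
# Shut down the profiling monitor (VsPerfMon.exe)
VSPerfCmd /shutdown
```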
## Next Steps
-Instrumenting Azure binaries in the emulator is not supported in the Visual Studio profiler, but if you want to test memory allocation, you can choose that option when profiling. You can also choose concurrency profiling, which helps you determine whether threads are wasting time competing for locks, or tier interaction profiling, which helps you track down performance problems when interacting between tiers of an application, most frequently between the data tier and a worker role. You can view the database queries that your app generates and use the profiling data to improve your use of the database. For information about tier interaction profiling, see the blog post [Walkthrough: Using the Tier Interaction Profiler in Visual Studio Team System 2010][3].
+Instrumenting Azure binaries in the emulator isn't supported in the Visual Studio profiler, but if you want to test memory allocation, you can choose that option when profiling. You can also choose concurrency profiling, which helps you determine whether threads are wasting time competing for locks, or tier interaction profiling, which helps you track down performance problems when interacting between tiers of an application, most frequently between the data tier and a worker role. You can view the database queries that your app generates and use the profiling data to improve your use of the database. For information about tier interaction profiling, see the blog post [Walkthrough: Using the Tier Interaction Profiler in Visual Studio Team System 2010][3].
[1]: ../azure-monitor/app/profiler.md [2]: /previous-versions/azure/hh411542(v=azure.100)
cloud-services Cloud Services Php Create Web Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-php-create-web-role.md
ms.assetid: 9f7ccda0-bd96-4f7b-a7af-fb279a9e975b
ms.devlang: php Previously updated : 04/11/2018 Last updated : 07/23/2024 # Create PHP web and worker roles+ ## Overview [!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This guide will show you how to create PHP web or worker roles in a Windows development environment, choose a specific version of PHP from the "built-in" versions available, change the PHP configuration, enable extensions, and finally, deploy to Azure. It also describes how to configure a web or worker role to use a PHP runtime (with custom configuration and extensions) that you provide.
+This guide shows you how to create PHP web or worker roles in a Windows development environment, choose a specific version of PHP from the "built-in" versions available, change the PHP configuration, enable extensions, and finally, deploy to Azure. It also describes how to configure a web or worker role to use a PHP runtime (with custom configuration and extensions) that you provide.
-Azure provides three compute models for running applications: Azure App Service, Azure Virtual Machines, and Azure Cloud Services. All three models support PHP. Cloud Services, which includes web and worker roles, provides *platform as a service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front-end web applications. A worker role can run asynchronous, long-running or perpetual tasks independent of user interaction or input.
+Azure provides three compute models for running applications: Azure App Service, Azure Virtual Machines, and Azure Cloud Services. All three models support PHP. Cloud Services, which includes web and worker roles, provides *platform as a service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front-end web applications. A worker role can run asynchronous, long-running, or perpetual tasks independent of user interaction or input.
For more information about these options, see [Compute hosting options provided by Azure](cloud-services-choose-me.md). ## Download the Azure SDK for PHP
-The [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php) consists of several components. This article will use two of them: Azure PowerShell and the Azure emulators. These two components can be installed via the Microsoft Web Platform Installer. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/).
+The [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php) consists of several components. This article uses two of them: Azure PowerShell and the Azure emulators. These two components can be installed via the Microsoft Web Platform Installer. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/).
## Create a Cloud Services project
-The first step in creating a PHP web or worker role is to create an Azure Service project. an Azure Service project serves as a logical container for web and worker roles, and it contains the project's [service definition (.csdef)] and [service configuration (.cscfg)] files.
+The first step in creating a PHP web or worker role is to create an Azure Service project. An Azure Service project serves as a logical container for web and worker roles, and it contains the project's [service definition (.csdef)] and [service configuration (.cscfg)] files.
To create a new Azure Service project, run Azure PowerShell as an administrator, and execute the following command:
To create a new Azure Service project, run Azure PowerShell as an administrator,
PS C:\>New-AzureServiceProject myProject ```
-This command will create a new directory (`myProject`) to which you can add web and worker roles.
+This command creates a new directory (`myProject`) to which you can add web and worker roles.
## Add PHP web or worker roles
PS C:\myProject> Add-AzurePHPWorkerRole roleName
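The web role equivalent looks like this; the role name is illustrative, and you can omit it to use the default:

```powershell
# Add a PHP web role to the current Azure Service project
Add-AzurePHPWebRole myWebRole
```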
## Use your own PHP runtime
-In some cases, instead of selecting a built-in PHP runtime and configuring it as described above, you may want to provide your own PHP runtime. For example, you can use the same PHP runtime in a web or worker role that you use in your development environment. This makes it easier to ensure that the application will not change behavior in your production environment.
+In some cases, instead of selecting a built-in PHP runtime and configuring it as previously described, you may want to provide your own PHP runtime. For example, you can use the same PHP runtime in a web or worker role that you use in your development environment. This process makes it easier to ensure that the application behavior stays the same in your production environment.
### Configure a web role to use your own PHP runtime To configure a web role to use a PHP runtime that you provide, follow these steps:
-1. Create an Azure Service project and add a PHP web role as described previously in this topic.
+1. Create an Azure Service project and add a PHP web role as described previously in this article.
2. Create a `php` folder in the `bin` folder that is in your web role's root directory, and then add your PHP runtime (all binaries, configuration files, subfolders, etc.) to the `php` folder.
-3. (OPTIONAL) If your PHP runtime uses the [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you will need to configure your web role to install [SQL Server Native Client 2012][sql native client] when it is provisioned. To do this, add the [sqlncli.msi x64 installer] to the `bin` folder in your web role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime does not use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step:
+3. (OPTIONAL) If your PHP runtime uses the [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you need to configure your web role to install [SQL Server Native Client 2012][sql native client] when it's provisioned. To do so, add the [sqlncli.msi x64 installer] to the `bin` folder in your web role's root directory. The startup script described in the next step silently runs the installer when the role is provisioned. If your PHP runtime doesn't use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step:
```console msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES ```
-4. Define a startup task that configures [Internet Information Services (IIS)][iis.net] to use your PHP runtime to handle requests for `.php` pages. To do this, open the `setup_web.cmd` file (in the `bin` file of your web role's root directory) in a text editor and replace its contents with the following script:
+4. Define a startup task that configures [Internet Information Services (IIS)][iis.net] to use your PHP runtime to handle requests for `.php` pages. To do so, open the `setup_web.cmd` file (in the `bin` file of your web role's root directory) in a text editor and replace its contents with the following script:
```cmd @ECHO ON
To configure a web role to use a PHP runtime that you provide, follow these step
%WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/handlers /+"[name='PHP',path='*.php',verb='GET,HEAD,POST',modules='FastCgiModule',scriptProcessor='%PHP_FULL_PATH%',resourceType='Either',requireAccess='Script']" /commit:apphost %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /"[fullPath='%PHP_FULL_PATH%'].queueLength:50000" ```
-5. Add your application files to your web role's root directory. This will be the web server's root directory.
-6. Publish your application as described in the [Publish your application](#publish-your-application) section below.
+5. Add your application files to your web role's root directory, which becomes the web server's root directory.
+6. Publish your application as described in the [Publish your application](#publish-your-application) section.
> [!NOTE]
-> The `download.ps1` script (in the `bin` folder of the web role's root directory) can be deleted after you follow the steps described above for using your own PHP runtime.
->
->
+> The `download.ps1` script (in the `bin` folder of the web role's root directory) can be deleted after you follow the preceding steps for using your own PHP runtime.
### Configure a worker role to use your own PHP runtime To configure a worker role to use a PHP runtime that you provide, follow these steps:
-1. Create an Azure Service project and add a PHP worker role as described previously in this topic.
+1. Create an Azure Service project and add a PHP worker role as described previously in this article.
2. Create a `php` folder in the worker role's root directory, and then add your PHP runtime (all binaries, configuration files, subfolders, etc.) to the `php` folder.
-3. (OPTIONAL) If your PHP runtime uses [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you will need to configure your worker role to install [SQL Server Native Client 2012][sql native client] when it is provisioned. To do this, add the [sqlncli.msi x64 installer] to the worker role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime does not use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step:
+3. (OPTIONAL) If your PHP runtime uses [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you need to configure your worker role to install [SQL Server Native Client 2012][sql native client] when it's provisioned. To do so, add the [sqlncli.msi x64 installer] to the worker role's root directory. The startup script described in the next step silently runs the installer when the role is provisioned. If your PHP runtime doesn't use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step:
```console msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES ```
-4. Define a startup task that adds your `php.exe` executable to the worker role's PATH environment variable when the role is provisioned. To do this, open the `setup_worker.cmd` file (in the worker role's root directory) in a text editor and replace its contents with the following script:
+4. Define a startup task that adds your `php.exe` executable to the worker role's PATH environment variable when the role is provisioned. To do so, open the `setup_worker.cmd` file (in the worker role's root directory) in a text editor and replace its contents with the following script:
```cmd @echo on
To configure a worker role to use a PHP runtime that you provide, follow these s
exit /b -1 ``` 5. Add your application files to your worker role's root directory.
-6. Publish your application as described in the [Publish your application](#publish-your-application) section below.
+6. Publish your application as described in the [Publish your application](#publish-your-application) section.
## Run your application in the compute and storage emulators
-The Azure emulators provide a local environment in which you can test your Azure application before you deploy it to the cloud. There are some differences between the emulators and the Azure environment. To understand this better, see [Use the Azure Storage Emulator for development and testing](../storage/common/storage-use-emulator.md).
+The Azure emulators provide a local environment in which you can test your Azure application before you deploy it to the cloud. There are some differences between the emulators and the Azure environment. To understand these differences better, see [Use the Azure Storage Emulator for development and testing](../storage/common/storage-use-emulator.md).
-Note that you must have PHP installed locally to use the compute emulator. The compute emulator will use your local PHP installation to run your application.
+You must have PHP installed locally to use the compute emulator. The compute emulator uses your local PHP installation to run your application.
To run your project in the emulators, execute the following command from your project's root directory:
To run your project in the emulators, execute the following command from your pr
PS C:\MyProject> Start-AzureEmulator ```
-You will see output similar to this:
+You see output similar to the following example:
```output Creating local package...
Role is running at http://127.0.0.1:81
Started ```
-You can see your application running in the emulator by opening a web browser and browsing to the local address shown in the output (`http://127.0.0.1:81` in the example output above).
+You can see your application running in the emulator by opening a web browser and browsing to the local address shown in the output (`http://127.0.0.1:81` in the example output shown earlier).
To stop the emulators, execute this command:
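Assuming the classic SDK's **Stop-AzureEmulator** cmdlet:

```powershell
PS C:\MyProject> Stop-AzureEmulator
```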
cloud-services Cloud Services Powershell Create Cloud Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-powershell-create-cloud-container.md
Title: Create a cloud service (classic) container with PowerShell | Microsoft Do
description: This article explains how to create a cloud service container with PowerShell. The container hosts web and worker roles. Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This article explains how to quickly create a Cloud Services container using Azure PowerShell cmdlets. Please follow the steps below:
+This article explains how to quickly create a Cloud Services container using Azure PowerShell cmdlets. Use the following steps:
1. Install the Microsoft Azure PowerShell cmdlet from the [Azure PowerShell downloads](https://aka.ms/webpi-azps) page. 2. Open the PowerShell command prompt.
Get-help New-AzureService
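For example, a minimal container creation looks like the following sketch; the service name and location are illustrative, and the name must be unique across Azure:

```powershell
# Create an empty cloud service container to host web and worker roles
New-AzureService -ServiceName "mycloudservice01" -Location "East US"
```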
### Next steps
-* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure/Get-AzureService), [Remove-AzureService](/powershell/module/servicemanagement/azure/Remove-AzureService), and [Set-AzureService](/powershell/module/servicemanagement/azure/set-azureservice) commands. You may also refer to [How to configure cloud services](cloud-services-how-to-configure-portal.md) for further information.
+* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure/Get-AzureService), [Remove-AzureService](/powershell/module/servicemanagement/azure/Remove-AzureService), and [Set-AzureService](/powershell/module/servicemanagement/azure/set-azureservice) commands. For more information, see [How to configure cloud services](cloud-services-how-to-configure-portal.md).
* To publish your cloud service project to Azure, refer to the **PublishCloudService.ps1** code sample from [archived cloud services repository](https://github.com/MicrosoftDocs/azure-cloud-services-files/tree/master/Scripts/cloud-services-continuous-delivery).
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
Title: Use the Service Management API (Python) - feature guide
+ Title: Use the classic deployment model (Python) - feature guide
description: Learn how to programmatically perform common service management tasks from Python. Previously updated : 02/21/2023 Last updated : 07/23/2024
This guide shows you how to programmatically perform common service management tasks from Python. The **ServiceManagementService** class in the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python) supports programmatic access to much of the service management-related functionality that is available in the [Azure portal]. You can use this functionality to create, update, and delete cloud services, deployments, data management services, and virtual machines. This functionality can be useful in building applications that need programmatic access to service management. ## <a name="WhatIs"> </a>What is service management?
-The Azure Service Management API provides programmatic access to much of the service management functionality available through the [Azure portal]. You can use the Azure SDK for Python to manage your cloud services and storage accounts.
+The Azure classic deployment model provides programmatic access to much of the service management functionality available through the [Azure portal]. You can use the Azure SDK for Python to manage your cloud services and storage accounts.
-To use the Service Management API, you need to [create an Azure account](https://azure.microsoft.com/pricing/free-trial/).
+To use the classic deployment model, you need to [create an Azure account](https://azure.microsoft.com/pricing/free-trial/).
## <a name="Concepts"> </a>Concepts
-The Azure SDK for Python wraps the [Service Management API][svc-mgmt-rest-api], which is a REST API. All API operations are performed over TLS and mutually authenticated by using X.509 v3 certificates. The management service can be accessed from within a service running in Azure. It also can be accessed directly over the Internet from any application that can send an HTTPS request and receive an HTTPS response.
+The Azure SDK for Python wraps the [classic deployment model][svc-mgmt-rest-api], which is a REST API. All API operations are performed over Transport Layer Security (TLS) and mutually authenticated by using X.509 v3 certificates. The management service can be accessed from within a service running in Azure. It also can be accessed directly over the Internet from any application that can send an HTTPS request and receive an HTTPS response.
## <a name="Installation"> </a>Installation All the features described in this article are available in the `azure-servicemanagement-legacy` package, which you can install by using pip. For more information about installation (for example, if you're new to Python), see [Install Python and the Azure SDK](/azure/developer/python/sdk/azure-sdk-install).
image_name = 'OpenLogic__OpenLogic-CentOS-62-20120531-en-us-30GB.vhd'
# will be created media_link = 'url_to_target_storage_blob_for_vm_hd'
-# Linux VM configuration, you can use WindowsConfigurationSet
+# Linux virtual machine (VM) configuration, you can use WindowsConfigurationSet
# for a Windows VM instead linux_config = LinuxConfigurationSet('myhostname', 'myuser', 'mypassword', True)
sms.delete_hosted_service(service_name='myvm')
``` ## Create a virtual machine from a captured virtual machine image
-To capture a VM image, you first call the **capture\_vm\_image** method.
+To capture a virtual machine (VM) image, you first call the **capture\_vm\_image** method.
```python from azure import *
To learn more about how to capture a Linux virtual machine in the classic deploy
To learn more about how to capture a Windows virtual machine in the classic deployment model, see [Capture a Windows virtual machine](/previous-versions/azure/virtual-machines/windows/classic/capture-image-classic). ## <a name="What's Next"> </a>Next steps
-Now that you've learned the basics of service management, you can access the [Complete API reference documentation for the Azure Python SDK](https://azure-sdk-for-python.readthedocs.org/) and perform complex tasks easily to manage your Python application.
+Now that you know the basics of service management, you can access the [Complete API reference documentation for the Azure Python SDK](https://azure-sdk-for-python.readthedocs.org/) and perform complex tasks easily to manage your Python application.
For more information, see the [Python Developer Center](https://azure.microsoft.com/develop/python/).
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md
Title: Get started with Python and Azure Cloud Services (classic)| Microsoft Doc
description: Overview of using Python Tools for Visual Studio to create Azure cloud services including web roles and worker roles. Previously updated : 02/21/2023 Last updated : 07/23/2024
This article provides an overview of using Python web and worker roles using [Py
## Prerequisites * [Visual Studio 2013, 2015, or 2017](https://www.visualstudio.com/) * [Python Tools for Visual Studio][Python Tools for Visual Studio] (PTVS)
-* [Azure SDK Tools for VS 2013][Azure SDK Tools for VS 2013] or
+* [Azure SDK Tools for Visual Studio (VS) 2013][Azure SDK Tools for VS 2013] or
[Azure SDK Tools for VS 2015][Azure SDK Tools for VS 2015] or [Azure SDK Tools for VS 2017][Azure SDK Tools for VS 2017] * [Python 2.7 32-bit][Python 2.7 32-bit] or [Python 3.8 32-bit][Python 3.8 32-bit]
This article provides an overview of using Python web and worker roles using [Py
[!INCLUDE [create-account-and-websites-note](../../includes/create-account-and-websites-note.md)] ## What are Python web and worker roles?
-Azure provides three compute models for running applications: [Web Apps feature in Azure App Service][execution model-web sites], [Azure Virtual Machines][execution model-vms], and [Azure Cloud Services][execution model-cloud services]. All three models support Python. Cloud Services, which include web and worker roles, provide *Platform as a Service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front end web applications, while a worker role can run asynchronous, long-running, or perpetual tasks independent of user interaction or input.
+Azure provides three compute models for running applications: [Web Apps feature in Azure App Service][execution model-web sites], [Azure Virtual Machines][execution model-vms], and [Azure Cloud Services][execution model-cloud services]. All three models support Python. Cloud Services, which include web and worker roles, provide *Platform as a Service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front end web applications. A worker role can run asynchronous, long-running, or perpetual tasks independent of user interaction or input.
For more information, see [What is a Cloud Service?].
The worker role template comes with boilerplate code to connect to an Azure stor
![Cloud Service Solution](./media/cloud-services-python-ptvs/worker.png)
-You can add web or worker roles to an existing cloud service at any time. You can choose to add existing projects in your solution, or create new ones.
+You can add web or worker roles to an existing cloud service at any time. You can choose to add existing projects in your solution, or create new ones.
![Add Role Command](./media/cloud-services-python-ptvs/add-new-or-existing-role.png)
-Your cloud service can contain roles implemented in different languages. For example, you can have a Python web role implemented using Django, with Python, or with C# worker roles. You can easily communicate between your roles using Service Bus queues or storage queues.
+Your cloud service can contain roles implemented in different languages. For example, you can have a Python web role implemented using Django, along with Python or C# worker roles. You can easily communicate between your roles using Service Bus queues or storage queues.
## Install Python on the cloud service > [!WARNING]
Your cloud service can contain roles implemented in different languages. For ex
> >
-The main problem with the setup scripts is that they do not install python. First, define two [startup tasks](cloud-services-startup-tasks.md) in the [ServiceDefinition.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) file. The first task (**PrepPython.ps1**) downloads and installs the Python runtime. The second task (**PipInstaller.ps1**) runs pip to install any dependencies you may have.
+The main problem with the setup scripts is that they don't install Python. First, define two [startup tasks](cloud-services-startup-tasks.md) in the [ServiceDefinition.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) file. The first task (**PrepPython.ps1**) downloads and installs the Python runtime. The second task (**PipInstaller.ps1**) runs pip to install any dependencies you may have.
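As a rough sketch of what a dependency-install task like **PipInstaller.ps1** might do (the interpreter path and file layout here are assumptions, not the template's actual contents):

```powershell
# Sketch: install the role's pip dependencies if a requirements.txt exists
$python = Join-Path $env:SystemDrive "Python38\python.exe"     # assumed install path
$requirements = Join-Path $PSScriptRoot "..\requirements.txt"  # assumed layout
if (Test-Path $requirements) {
    & $python -m pip install -r $requirements
}
```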
The following scripts were written targeting Python 3.8. If you want to use version 2.x of Python, set the **PYTHON2** variable to **on** for the two startup tasks and the runtime task: `<Variable name="PYTHON2" value="on" />`.
if (-not $is_emulated){
> >
-The **bin\LaunchWorker.ps1** was originally created to do a lot of prep work but it doesn't really work. Replace the contents in that file with the following script.
+The **bin\LaunchWorker.ps1** was originally created to do a lot of prep work, but it doesn't really work. Replace the contents in that file with the following script.
This script calls the **worker.py** file from your Python project. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used, otherwise Python 3.8 is used.
else
``` #### ps.cmd
-The Visual Studio templates should have created a **ps.cmd** file in the **./bin** folder. This shell script calls out the PowerShell wrapper scripts above and provides logging based on the name of the PowerShell wrapper called. If this file wasn't created, here is what should be in it.
+The Visual Studio templates should create a **ps.cmd** file in the **./bin** folder. This shell script calls the preceding PowerShell wrapper scripts and provides logging based on the name of the PowerShell wrapper called. If the file wasn't created, add the following script to it:
```cmd @echo off
if not exist "%DiagnosticStore%\LogFiles" mkdir "%DiagnosticStore%\LogFiles"
%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Unrestricted -File %* >> "%DiagnosticStore%\LogFiles\%~n1.txt" 2>> "%DiagnosticStore%\LogFiles\%~n1.err.txt" ``` -- ## Run locally If you set your cloud service project as the startup project and press F5, the cloud service runs in the local Azure emulator.
-Although PTVS supports launching in the emulator, debugging (for example, breakpoints) does not work.
+Although PTVS supports launching in the emulator, debugging (for example, breakpoints) doesn't work.
-To debug your web and worker roles, you can set the role project as the startup project and debug that instead. You can also set multiple startup projects. Right-click the solution and then select **Set StartUp Projects**.
+To debug your web and worker roles, you can set the role project as the startup project and debug that instead. You can also set multiple startup projects. Right-click the solution and then select **Set StartUp Projects**.
![Solution Startup Project Properties](./media/cloud-services-python-ptvs/startup.png)
To publish, right-click the cloud service project in the solution and then selec
Follow the wizard. If you need to, enable remote desktop. Remote desktop is helpful when you need to debug something.
-When you are done configuring settings, click **Publish**.
+When you finish configuring settings, choose **Publish**.
-Some progress appears in the output window, then you'll see the Microsoft Azure Activity Log window.
+Some progress appears in the output window, and then the Microsoft Azure Activity Log window opens.
![Microsoft Azure Activity Log Window](./media/cloud-services-python-ptvs/publish-activity-log.png)
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
To connect to any 1:1 or group call, use the ServerCallLocator. If you started a
```csharp Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events CallLocator serverCallLocator = new ServerCallLocator("<ServerCallId>");
-ConnctCallResult response = await client.ConnectAsync(serverCallLocator, callbackUri);
+ConnectCallResult response = await client.ConnectCallAsync(serverCallLocator, callbackUri);
``` ### [Java](#tab/java)
To connect to a Rooms call, use RoomCallLocator which takes RoomId.
```csharp Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events CallLocator roomCallLocator = new RoomCallLocator("<RoomId>");
-ConnctCallResult response = await client.ConnectAsync(roomCallLocator, callbackUri);
+ConnectCallResult response = await client.ConnectCallAsync(roomCallLocator, callbackUri);
``` ### [Java](#tab/java)
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
This quickstart describes how to send email using our Email SDKs.
::: zone-end ::: zone pivot="programming-language-csharp" ::: zone-end ::: zone pivot="programming-language-javascript"
container-apps Connect Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md
Previously updated : 07/12/2024 Last updated : 07/23/2024
The following diagram shows how these values are used to compose a container app
[!INCLUDE [container-apps-get-fully-qualified-domain-name](../../includes/container-apps-get-fully-qualified-domain-name.md)]
-## Dapr location
+### Dapr location
Developing microservices often requires you to implement patterns common to distributed architecture. Dapr allows you to secure microservices with mutual Transport Layer Security (TLS) (client certificates), trigger retries when errors occur, and take advantage of distributed tracing when Azure Application Insights is enabled.
A microservice that uses Dapr is available through the following URL pattern:
:::image type="content" source="media/connect-apps/azure-container-apps-location-dapr.png" alt-text="Azure Container Apps container app location with Dapr.":::
+## Call a container app by name
+
+You can call a container app by sending a request to `http://<CONTAINER_APP_NAME>` from another app in the environment.
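For example, from code or a console inside another app in the same environment (the app name and route are placeholders):

```powershell
# Call a peer container app by name over the environment's internal network
Invoke-RestMethod -Uri "http://my-other-app/api/items"
```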
+ ## Next steps > [!div class="nextstepaction"]
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following tables describe how to configure a collection of NSG allow rules.
#### Considerations - If you're running HTTP servers, you might need to add ports `80` and `443`.-- Don't explicitly deny the Azure DNS address `168.63.128.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function.
+- Don't explicitly deny the Azure DNS address `168.63.129.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
Here are some rules for included and excluded paths precedence in Azure Cosmos D
## Vector indexes
+> [!NOTE]
+> You must enroll in the [Azure Cosmos DB NoSQL Vector Index preview feature](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) to specify a vector indexing policy.
+ **Vector** indexes increase efficiency when you perform vector searches by using the `VectorDistance` system function. Vector searches have significantly lower latency, higher throughput, and lower RU consumption when they use a vector index. You can specify the following types of vector index policies: | Type | Description | Max dimensions |
Here's an example of an indexing policy with a vector index:
} ```
-> [!NOTE]
-> You must enroll in the [Azure Cosmos DB NoSQL Vector Index preview feature](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) to specify a vector indexing policy.>
- > [!IMPORTANT] > A vector indexing policy must be on the path defined in the container's vector policy. [Learn more about container vector policies](nosql/vector-search.md#container-vector-policies). > Vector indexes must also be defined at the time of Container creation and cannot be modified once created. In a future release, vector indexes will be modifiable. -
+>[!IMPORTANT]
+> Add the vector path to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. Not adding the vector path to "excludedPaths" results in a higher RU charge and higher latency for vector insertions.
## Spatial indexes
cosmos-db How To Dotnet Vector Index Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-vector-index-query.md
For our example with book details, the vector policy can look like the example J
Once the vector embedding paths are decided, vector indexes need to be added to the indexing policy. Currently, the vector search feature for Azure Cosmos DB for NoSQL is supported only on new containers, so you need to apply the vector policy during container creation; it can't be modified later. For this example, the indexing policy would look something like this: ```csharp
- Collection<Embedding> collection = new Collection<Embedding>(embeddings);
- ContainerProperties properties = new ContainerProperties(id: "vector-container", partitionKeyPath: "/id")
- {
- VectorEmbeddingPolicy = new(collection),
- IndexingPolicy = new IndexingPolicy()
- {
- VectorIndexes = new()
- {
- new VectorIndexPath()
- {
- Path = "/vector",
- Type = VectorIndexType.QuantizedFlat,
- }
- }
- },
- };
+ Collection<Embedding> collection = new Collection<Embedding>(embeddings);
+ ContainerProperties properties = new ContainerProperties(id: "vector-container", partitionKeyPath: "/id")
+ {
+ VectorEmbeddingPolicy = new(collection),
+ IndexingPolicy = new IndexingPolicy()
+ {
+ VectorIndexes = new()
+ {
+ new VectorIndexPath()
+ {
+ Path = "/vector",
+ Type = VectorIndexType.QuantizedFlat,
+ }
+ }
+ },
+ };
+ properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+ properties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/vector/*" });
```
+>[!IMPORTANT]
+> Add the vector path to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. Not adding the vector path to "excludedPaths" results in a higher RU charge and higher latency for vector insertions.
> [!IMPORTANT] > Currently vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy during the time of container creation as it can't be modified later. Both policies will be modifiable in a future improvement to the preview feature.
cosmos-db How To Java Vector Index Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-java-vector-index-query.md
Once the vector embedding paths are decided, vector indexes need to be added to
```java IndexingPolicy indexingPolicy = new IndexingPolicy(); indexingPolicy.setIndexingMode(IndexingMode.CONSISTENT);
-ExcludedPath excludedPath = new ExcludedPath("/*");
-indexingPolicy.setExcludedPaths(Collections.singletonList(excludedPath));
+ExcludedPath excludedPath1 = new ExcludedPath("/coverImageVector/*");
+ExcludedPath excludedPath2 = new ExcludedPath("/contentVector/*");
+indexingPolicy.setExcludedPaths(ImmutableList.of(excludedPath1, excludedPath2));
-IncludedPath includedPath1 = new IncludedPath("/name/?");
-IncludedPath includedPath2 = new IncludedPath("/description/?");
-indexingPolicy.setIncludedPaths(ImmutableList.of(includedPath1, includedPath2));
+IncludedPath includedPath1 = new IncludedPath("/*");
+indexingPolicy.setIncludedPaths(Collections.singletonList(includedPath1));
// Creating vector indexes CosmosVectorIndexSpec cosmosVectorIndexSpec1 = new CosmosVectorIndexSpec();
database.createContainer(collectionDefinition).block();
``` +
+>[!IMPORTANT]
+> The vector path is added to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. If you don't add the vector path to "excludedPaths", vector insertions incur higher RU charges and latency.
+ > [!IMPORTANT] > Currently, vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy at container creation because they can't be modified later. Both policies will be modifiable in a future improvement to the preview feature.
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md
In addition to including or excluding paths for individual properties, you can a
> You must enroll in the [Azure Cosmos DB NoSQL Vector Index preview feature](vector-search.md#enroll-in-the-vector-search-preview-feature) to use vector search in Azure Cosmos DB for NoSQL.> >[!IMPORTANT]
-> A vector indexing policy must be on the path defined in the container's vector policy. [Learn more about container vector policies](vector-search.md#container-vector-policies).)
+> A vector indexing policy must be on the same path defined in the container's vector policy. [Learn more about container vector policies](vector-search.md#container-vector-policies).
```json {
In addition to including or excluding paths for individual properties, you can a
"excludedPaths": [ { "path": "/_etag/?"
+ },
+ {
+ "path": "/vector/*"
} ], "vectorIndexes": [
In addition to including or excluding paths for individual properties, you can a
} ```
+>[!IMPORTANT]
+> The vector path added to the "excludedPaths" section of the indexing policy to ensure optimized performance for insertion. Not adding the vector path to "excludedPaths" will result in higher RU charge and latency for vector insertions.
++ You can define the following types of vector index policies: | Type | Description | Max dimensions |
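To illustrate the shape of that section, here's a minimal JSON sketch of a `vectorIndexes` definition that pairs embedding paths with index types. The paths are illustrative, and the type names (`quantizedFlat`, as used in the C# sample earlier, and `diskANN`) are assumed spellings for this sketch:

```json
{
  "vectorIndexes": [
    {
      "path": "/coverImageVector",
      "type": "quantizedFlat"
    },
    {
      "path": "/contentVector",
      "type": "diskANN"
    }
  ]
}
```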
cosmos-db How To Python Vector Index Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-python-vector-index-query.md
vector_embedding_policy = {
            "distanceFunction": "cosine",             "dimensions": 10         }
-    ]
+ ]
} ``` ++ ## Creating a vector index in the indexing policy Once the vector embedding paths are decided, vector indexes need to be added to the indexing policy. For this example, the indexing policy would look something like this:
indexing_policy = {
    ],     "excludedPaths": [         {
-            "path": "/\"_etag\"/?"
+            "path": "/\"_etag\"/?"
+        },
+        {
+            "path": "/coverImageVector/*"
+        },
+        {
+            "path": "/contentVector/*"
        }     ],     "vectorIndexes": [
indexing_policy = {
} ```
+>[!IMPORTANT]
+> The vector path is added to the "excludedPaths" section of the indexing policy to ensure optimized insertion performance. If you don't add the vector path to "excludedPaths", vector insertions incur higher RU charges and latency.
++ > [!IMPORTANT] > Currently, vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy at container creation because they can't be modified later. Both policies will be modifiable in a future improvement to the preview feature.
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
Title: Vector database description: Vector database functionalities, implementation, and comparison.--++ - build-2024 Previously updated : 03/30/2024 Last updated : 07/23/2024 # Vector database
DiskANN enables you to perform highly accurate, low-latency queries at any scale
- [Vector indexing in Azure Cosmos DB for NoSQL](index-policy.md#vector-indexes) - [VectorDistance system function NoSQL queries](nosql/query/vectordistance.md) - [How to setup vector database capabilities in Azure Cosmos DB NoSQL](nosql/vector-search.md)-- [Python notebook tutorial](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples)-- [C# Solution accelerator for building AI apps](https://aka.ms/BuildModernAiAppsSolution)-- [C# Azure Cosmos DB Chatbot with Azure OpenAI](https://aka.ms/cosmos-chatgpt-sample)
+- [Python - Notebook tutorial](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples)
+- [C# - Build Your Own Copilot Complete Solution Accelerator with AKS and Semantic Kernel](https://aka.ms/cdbcopilot)
+- [C# - Build Your Own Copilot Sample App and Hands-on-Lab](https://github.com/AzureCosmosDB/cosmosdb-nosql-copilot)
+- [Python - Movie Chatbot](https://github.com/AzureCosmosDB/Fabric-Conf-2024-Build-AI-Apps/tree/main/AzureCosmosDBforNoSQL)
-### API for MongoDB
+### Azure Cosmos DB for MongoDB
Use the natively [integrated vector database in Azure Cosmos DB for MongoDB](mongodb/vcore/vector-search.md) (vCore architecture), which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications. #### Code samples -- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)
+- [Build Your Own Copilot for Azure Cosmos DB for MongoDB in C# with Semantic Kernel](https://github.com/AzureCosmosDB/cosmosdb-mongo-copilot)
- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore) - [C# RAG pattern - Integrate OpenAI Services with Cosmos](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore) - [Python RAG pattern - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)-- [Python notebook tutorial - Vector database integration through LangChain](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db)-- [Python notebook tutorial - LLM Caching integration through LangChain](https://python.langchain.com/docs/integrations/llms/llm_caching#azure-cosmos-db-semantic-cache)
+- [Python Notebook - Vector database integration through LangChain tutorial](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db)
+- [Python Notebook - LLM Caching integration through LangChain tutorial](https://python.langchain.com/docs/integrations/llms/llm_caching#azure-cosmos-db-semantic-cache)
- [Python - LlamaIndex integration](https://docs.llamaindex.ai/en/stable/examples/vector_stores/AzureCosmosDBMongoDBvCoreDemo.html) - [Python - Semantic Kernel memory integration](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/memory/azure_cosmosdb)
+- [Python Notebook - Movie Chatbot](https://github.com/AzureCosmosDB/Fabric-Conf-2024-Build-AI-Apps/tree/main/AzureCosmosDBforMongoDB)
> [!div class="nextstepaction"] > [Use Azure Cosmos DB for MongoDB lifetime free tier](mongodb/vcore/free-tier.md)
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01
* Now that you created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md). * For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).
-* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions).
+* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions).
* For advanced subscription creation scenarios using REST API, see [Alias - Create](/rest/api/subscription/2021-10-01/alias/create).
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement-across-tenants.md
Content-Type: application/json
* Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md). * For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).
-* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions).
+* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions).
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01
* Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md). * For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).
-* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions).
+* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions).
* For advanced subscription creation scenarios using REST API, see [Alias - Create](/rest/api/subscription/2021-10-01/alias/create).
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
The cloud staging tasks that run on your Azure-SSIS IR aren't billed separately
## Enforce TLS 1.2
-If you need to access data stores that have been configured to use only the strongest cryptography/most secure network protocol (TLS 1.2), including your Azure Blob Storage for staging, you must enable only TLS 1.2 and disable older SSL/TLS versions at the same time on your self-hosted IR. To do so, you can download and run the *main.cmd* script that we provide in the *CustomSetupScript/UserScenarios/TLS 1.2* folder of our public preview blob container. Using [Azure Storage Explorer](https://storageexplorer.com/), you can connect to our public preview blob container by entering the following SAS URI:
-
-`https://ssisazurefileshare.blob.core.windows.net/publicpreview?sp=rl&st=2020-03-25T04:00:00Z&se=2025-03-25T04:00:00Z&sv=2019-02-02&sr=c&sig=WAD3DATezJjhBCO3ezrQ7TUZ8syEUxZZtGIhhP6Pt4I%3D`
+If you need to access data stores that are configured to use only the strongest cryptography and most secure network protocol (TLS 1.2), including your Azure Blob Storage for staging, you must enable only TLS 1.2 and disable older SSL/TLS versions on your self-hosted IR at the same time. To do so, download and run the *main.cmd* script from https://github.com/Azure/Azure-DataFactory/tree/main/SamplesV2/SQLServerIntegrationServices/publicpreview/CustomSetupScript/UserScenarios/TLS%201.2.
## Current limitations
defender-for-cloud Devops Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-support.md
The following tables summarize the availability and prerequisites for each featu
| [Security recommendations to fix code vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud)| ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security for Azure DevOps](/azure/devops/repos/security/configure-github-advanced-security-features?view=azure-devops&tabs=yaml&preserve-view=true) for CodeQL findings, [Microsoft Security DevOps extension](azure-devops-extension.yml) | | [Security recommendations to discover exposed secrets](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security for Azure DevOps](/azure/devops/repos/security/configure-github-advanced-security-features?view=azure-devops&tabs=yaml&preserve-view=true) | | [Security recommendations to fix open source vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security for Azure DevOps](/azure/devops/repos/security/configure-github-advanced-security-features?view=azure-devops&tabs=yaml&preserve-view=true) |
-| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md#configure-iac-scanning-and-view-the-results-in-azure-devops) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [Microsoft Security DevOps extension](azure-devops-extension.yml) |
+| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [Microsoft Security DevOps extension](azure-devops-extension.yml) |
| [Security recommendations to fix DevOps environment misconfigurations](concept-devops-posture-management-overview.md) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | N/A | | [Pull request annotations](review-pull-request-annotations.md) | | ![Yes Icon](./medi) | | [Code to cloud mapping for Containers](container-image-mapping.md) | | ![Yes Icon](./media/icons/yes-icon.png) | [Microsoft Security DevOps extension](azure-devops-extension.yml#configure-the-microsoft-security-devops-azure-devops-extension) |
The following tables summarize the availability and prerequisites for each featu
| Feature | Foundational CSPM | Defender CSPM | Prerequisites | |-|:--:|:--:|| | [Connect GitHub repositories](quickstart-onboard-github.md) | ![Yes Icon](./medi#prerequisites) |
-| [Security recommendations to fix code vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud)| ![Yes Icon](./medi) |
+| [Security recommendations to fix code vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud)| ![Yes Icon](./medi) |
| [Security recommendations to discover exposed secrets](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) | | [Security recommendations to fix open source vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitHub Advanced Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) |
-| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md#configure-iac-scanning-and-view-the-results-in-azure-devops) | ![Yes Icon](./medi) |
+| [Security recommendations to fix infrastructure as code misconfigurations](iac-vulnerabilities.md) | ![Yes Icon](./medi) |
| [Security recommendations to fix DevOps environment misconfigurations](concept-devops-posture-management-overview.md) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | N/A | | [Code to cloud mapping for Containers](container-image-mapping.md) | | ![Yes Icon](./medi) |
-| [Code to cloud mapping for Infrastructure as Code templates](iac-template-mapping.md) | | ![Yes Icon](./medi) |
| [Attack path analysis](how-to-manage-attack-path.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector |
defender-for-cloud Gain End User Context Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/gain-end-user-context-ai.md
If a field's name is misspelled, the Azure OpenAI API call will still succeed.
The provided schema consists of the `SecurityContext` object, which contains several parameters that describe the application itself and the end user that interacts with the application. These fields help your security operations teams investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications (a sketch of the context object follows the list below). -- End used ID
+- End user ID
- End user type
- End user tenant's ID
- Source IP address
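As an illustration only, a minimal JSON sketch of such a context object follows. The key names are hypothetical placeholders mapped from the list above, not the exact schema keys (the `_note` field repeats that caveat in the payload itself):

```json
{
  "SecurityContext": {
    "_note": "Key names are hypothetical placeholders, not the exact schema.",
    "EndUserId": "jane.doe@contoso.com",
    "EndUserType": "EntraId",
    "EndUserTenantId": "00000000-0000-0000-0000-000000000000",
    "SourceIp": "203.0.113.25"
  }
}
```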
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Microsoft Security DevOps uses the following Open Source tools:
- [Connect your GitHub repositories](quickstart-onboard-github.md). -- Follow the guidance to set up [GitHub Advanced Security](https://docs.github.com/en/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization/managing-security-and-analysis-settings-for-your-organization) to view the DevOps posture assessments in Defender for Cloud.- - Open the [Microsoft Security DevOps GitHub action](https://github.com/marketplace/actions/security-devops-action) in a new window. - Ensure that [Workflow permissions are set to Read and Write](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository) on the GitHub repository. This includes setting "id-token: write" permissions in the GitHub Workflow for federation with Defender for Cloud.
Microsoft Security DevOps uses the following Open Source tools:
on: push: branches:
- - master
+ - main
jobs: sample: name: Microsoft Security DevOps
- # MSDO runs on windows-latest.
- # ubuntu-latest also supported
+ # Windows and Linux agents are supported
runs-on: windows-latest permissions: contents: read id-token: write actions: read
+ # Write access for security-events is only required for customers looking for MSDO results to appear in the codeQL security alerts tab on GitHub (Requires GHAS)
security-events: write steps:
Microsoft Security DevOps uses the following Open Source tools:
- uses: actions/checkout@v3 # Run analyzers
- - name: Run Microsoft Security DevOps Analysis
+ - name: Run Microsoft Security DevOps
uses: microsoft/security-devops-action@latest id: msdo # with:
Microsoft Security DevOps uses the following Open Source tools:
# languages: string. Optional. A comma-separated list of languages to analyze. Example: 'javascript,typescript'. Defaults to all. # tools: string. Optional. A comma-separated list of analyzer tools to run. Values: 'bandit', 'binskim', 'checkov', 'eslint', 'templateanalyzer', 'terrascan', 'trivy'.
- # Upload alerts to the Security tab
- - name: Upload alerts to Security tab
- uses: github/codeql-action/upload-sarif@v2
- with:
- sarif_file: ${{ steps.msdo.outputs.sarifFile }}
-
- # Upload alerts file as a workflow artifact
- - name: Upload alerts file as a workflow artifact
- uses: actions/upload-artifact@v3
- with:
- name: alerts
- path: ${{ steps.msdo.outputs.sarifFile }}
+ # Upload alerts to the Security tab - required for MSDO results to appear in the codeQL security alerts tab on GitHub (Requires GHAS)
+ # - name: Upload alerts to Security tab
+ # uses: github/codeql-action/upload-sarif@v3
+ # with:
+ # sarif_file: ${{ steps.msdo.outputs.sarifFile }}
+
+ # Upload alerts file as a workflow artifact - required for MSDO results to appear in the codeQL security alerts tab on GitHub (Requires GHAS)
+ # - name: Upload alerts file as a workflow artifact
+ # uses: actions/upload-artifact@v3
+ # with:
+ # name: alerts
+ # path: ${{ steps.msdo.outputs.sarifFile }}
```-
- > [!NOTE]
- > **For additional tool configuration options and instructions, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)**
+ > [!NOTE]
+ > **For additional tool configuration options and instructions, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)**
1. Select **Start commit**
- :::image type="content" source="media/msdo-github-action/start-commit.png" alt-text="Screenshot showing you where to select start commit.":::
-
-1. Select **Commit new file**.
+ :::image type="content" source="media/msdo-github-action/start-commit.png" alt-text="Screenshot showing you where to select start commit.":::
+
+1. Select **Commit new file**. Note that the process can take up to one minute to complete.
- :::image type="content" source="media/msdo-github-action/commit-new.png" alt-text="Screenshot showing you how to commit a new file.":::
-
- The process can take up to one minute to complete.
+ :::image type="content" source="media/msdo-github-action/commit-new.png" alt-text="Screenshot showing you how to commit a new file.":::
1. Select **Actions** and verify the new action is running.
- :::image type="content" source="media/msdo-github-action/verify-actions.png" alt-text="Screenshot showing you where to navigate to, to see that your new action is running." lightbox="media/msdo-github-action/verify-actions.png":::
+ :::image type="content" source="media/msdo-github-action/verify-actions.png" alt-text="Screenshot showing you where to navigate to, to see that your new action is running." lightbox="media/msdo-github-action/verify-actions.png":::
## View Scan Results **To view your scan results**:
-1. Sign in to [GitHub](https://www.github.com).
-
-1. Navigate to **Security** > **Code scanning alerts** > **Tool**.
+1. Sign in to Azure.
-1. From the dropdown menu, select **Filter by tool**.
+1. Navigate to Defender for Cloud > DevOps Security.
-Code scanning findings will be filtered by specific MSDO tools in GitHub. These code scanning results are also pulled into Defender for Cloud recommendations.
+1. From the DevOps security blade, you should begin to see, within minutes, the same MSDO security results for the associated repository that developers see in their CI logs. Customers with GitHub Advanced Security also see the findings ingested from these tools.
## Learn more
Code scanning findings will be filtered by specific MSDO tools in GitHub. These
- Learn how to [deploy apps from GitHub to Azure](/azure/developer/github/deploy-to-azure).
-## Related content
+## Next steps
Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
This article shows you how to apply a template YAML configuration file to scan y
- If you manage your source code in Azure DevOps, set up the [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.yml). - Ensure that you have an IaC template in your repository.
-<a name="configure-iac-scanning-and-view-the-results-in-github"></a>
- ## Set up and run a GitHub action to scan your connected IaC source code To set up an action and view scan results in GitHub:
To set up an action and view scan results in GitHub:
1. Select the workflow to see the action status.
-1. To view the results of the scan, go to **Security** > **Code scanning alerts**.
-
- You can filter by tool to see only the IaC findings.
-
-<a name="configure-iac-scanning-and-view-the-results-in-azure-devops"></a>
+1. To view the results of the scan, go to **Defender for Cloud** > **DevOps security** (no GHAS prerequisite) or **Security** > **Code scanning alerts** natively in GitHub (requires a GHAS license).
## Set up and run an Azure DevOps extension to scan your connected IaC source code
To set up an extension and view scan results in Azure DevOps:
## View details and remediation information for applied IaC rules
-The IaC scanning tools that are included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) ([PSRule](https://aka.ms/ps-rule-azure) is included in Template Analyzer) and [Terrascan](https://github.com/tenable/terrascan).
+The IaC scanning tools that are included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) ([PSRule](https://aka.ms/ps-rule-azure) is included in Template Analyzer), [Checkov](https://www.checkov.io/), and [Terrascan](https://github.com/tenable/terrascan).
Template Analyzer runs rules on Azure Resource Manager templates (ARM templates) and Bicep templates. For more information, see the [Template Analyzer rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules). Terrascan runs rules on ARM templates and templates for CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform. For more information, see the [Terrascan rules](https://runterrascan.io/docs/policies/).
+Checkov runs rules on ARM templates and templates for CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform. For more information, see the [Checkov rules](https://www.checkov.io/5.Policy%20Index/all.html).
+ To learn more about the IaC scanning tools that are included with Microsoft Security DevOps, see: - [Template Analyzer](https://github.com/Azure/template-analyzer)-- [PSRule](https://aka.ms/ps-rule-azure)
+- [Checkov](https://www.checkov.io/)
- [Terrascan](https://runterrascan.io/) ## Related content
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
To complete this quick start, you need:
- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- GitHub Enterprise with GitHub Advanced Security enabled for posture assessments of secrets, dependencies, Infrastructure-as-Code misconfigurations, and code quality analysis within GitHub repositories.- ## Availability | Aspect | Details |
To complete this quick start, you need:
> [!NOTE] > **Security Reader** role can be applied on the Resource Group/GitHub connector scope to avoid setting highly privileged permissions on a Subscription level for read access of DevOps security posture assessments.
-## Connect your GitHub account
+## Connect your GitHub environment
-To connect your GitHub account to Microsoft Defender for Cloud:
+To connect your GitHub environment to Microsoft Defender for Cloud:
1. Sign in to the [Azure portal](https://portal.azure.com/).
To connect your GitHub account to Microsoft Defender for Cloud:
1. Select **Install**.
-1. Select the organizations to install the GitHub application. It's recommended to grant access to **all repositories** to ensure Defender for Cloud can secure your entire GitHub environment.
-
- This step grants Defender for Cloud access to the selected organizations.
-
-1. For Organizations, select one of the following:
-
- - Select **all existing organizations** to autodiscover all repositories in GitHub organizations where the DevOps security GitHub application is installed.
- - Select **all existing and future organizations** to autodiscover all repositories in GitHub organizations where the DevOps security GitHub application is installed and future organizations where the DevOps security GitHub application is installed.
+1. Select the organizations to install the Defender for Cloud GitHub application. It's recommended to grant access to **all repositories** to ensure Defender for Cloud can secure your entire GitHub environment.
+ This step grants Defender for Cloud access to organizations that you wish to onboard.
+
+1. All organizations with the Defender for Cloud GitHub application installed will be onboarded to Defender for Cloud. To change the behavior going forward, select one of the following:
+
+ - Select **all existing organizations** to automatically discover all repositories in GitHub organizations where the DevOps security GitHub application is installed.
+
+ - Select **all existing and future organizations** to automatically discover all repositories in GitHub organizations where the DevOps security GitHub application is installed and future organizations where the DevOps security GitHub application is installed.
+ > [!NOTE]
+ > Organizations can be removed from your connector after the connector creation is complete. See the [editing your DevOps connector](edit-devops-connector.md) page for more information.
+
1. Select **Next: Review and generate**. 1. Select **Create**.
defender-for-cloud Recommendations Reference Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-devops.md
DevOps recommendations don't affect your [secure score](secure-score-security-co
**Severity**: High
+### [(Preview) Azure DevOps projects should have creation of classic pipelines disabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/9f4a17ee-7a02-4978-b968-8c36b74ac8e3)
+
+**Description**: Disabling the creation of classic build and release pipelines mitigates a security risk that stems from YAML and classic pipelines sharing the same resources, for example, the same service connections. Potential attackers can use classic pipelines to create processes that evade the typical defense mechanisms set up around modern YAML pipelines.
+
+**Severity**: High
+ ## GitHub recommendations ### [GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3)
DevOps recommendations don't affect your [secure score](secure-score-security-co
**Severity**: High
+### [(Preview) GitHub organizations should block Copilot suggestions that match public code](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/98e858ed-6e88-4698-b538-f51b31ad57f6)
+
+**Description**: Enabling GitHub Copilot's filter to block code suggestions matching public code on GitHub enhances security and legal compliance. It prevents the unintentional incorporation of public or open-source code, reducing the risk of legal issues and ensuring adherence to licensing terms. Additionally, it helps avoid introducing potential vulnerabilities from public code into the organization's projects, thereby maintaining higher code quality and security. When the filter is enabled, GitHub Copilot checks code suggestions, together with about 150 characters of surrounding code, against public code on GitHub. If there's a match or near match, the suggestion isn't shown.
+
+**Severity**: High
+
+### [(Preview) GitHub organizations should enforce multifactor authentication for outside collaborators](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/a9621d26-9d8c-4cd6-8ad0-84501eb88f17)
+
+**Description**: Enforcing multifactor authentication for outside collaborators in a GitHub organization is a security measure that requires collaborators to use an additional form of identification besides their password to access the organization's repositories and resources. This enhances security by protecting against unauthorized access, even if a password is compromised, and helps ensure compliance with industry standards. It involves informing collaborators about the requirement and providing support for the transition, ultimately reducing the risk of data breaches.
+
+**Severity**: High
+
+### [(Preview) GitHub repositories should require minimum two-reviewer approval for code pushes](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/20be7df7-9ebb-4fb4-95a9-3ae19b78b80a)
+
+**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in GitHub repositories. We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities.
+
+**Severity**: High
+ ### GitLab recommendations ### [GitLab projects should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/867001c3-2d01-4db7-b513-5cb97638f23d/showSecurityCenterCommandBar~/false)
DevOps recommendations don't affect your [secure score](secure-score-security-co
## Related content - [Learn about security recommendations](security-policy-concept.md)-- [Review security recommendations](review-security-recommendations.md)
+- [Review security recommendations](review-security-recommendations.md)
dev-box Concept Dev Box Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-deployment-guide.md
When you have the following requirements, you need to use Azure network connecti
When connecting to resources on-premises through Microsoft Entra hybrid joins, work with your Azure network topology expert. Best practice is to implement a [hub-and-spoke network topology](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology). The hub is the central point that connects to your on-premises network; you can use an Express Route, a site-to-site VPN, or a point-to-site VPN. The spoke is the virtual network that contains the dev boxes. You peer the dev box virtual network to the on-premises connected virtual network to provide access to on-premises resources. Hub and spoke topology can help you manage network traffic and security.
+Network planning should include an estimate of the number of IP addresses you need and their distribution across virtual networks. Extra free IP addresses are necessary for the Azure network connection health check: you need one IP address per dev box, plus two IP addresses for the health check and Dev Box infrastructure. For example, a pool of 50 dev boxes requires at least 52 free IP addresses.
+ Learn more about [Microsoft Dev Box networking requirements](./concept-dev-box-network-requirements.md?tabs=W365). ### Step 3: Configure security groups for role-based access control
dev-box Concept Dev Box Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-network-requirements.md
These FQDNs and endpoints only correspond to client sites and resources. This li
## Troubleshooting
+This section covers some common connection and network issues.
+ ### Connection issues - **Logon attempt failed**
These FQDNs and endpoints only correspond to client sites and resources. This li
For more information about troubleshooting group policy issues, see [Applying Group Policy troubleshooting guidance](/troubleshoot/windows-server/group-policy/applying-group-policy-troubleshooting-guidance). - ### IPv6 addressing issues If you're experiencing IPv6 issues, check that the *Microsoft.AzureActiveDirectory* service endpoint is not enabled on the virtual network or subnet. This service endpoint converts the IPv4 to IPv6. For more information, see [Virtual Network service endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview).
+### Updating dev box definition image issues
+
+When you update the image used in a dev box definition, ensure that you have sufficient IP addresses available in your virtual network. Extra free IP addresses are necessary for the Azure network connection health check; if the health check fails, the dev box definition doesn't update. You need one IP address per dev box, plus two IP addresses for the health check and Dev Box infrastructure.
+
+For more information about updating dev box definition images, see [Update a dev box definition](how-to-manage-dev-box-definitions.md#update-a-dev-box-definition).
## Related content
dev-box How To Manage Dev Box Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md
The following steps show you how to create a dev box definition by using an exis
Over time, your needs for dev boxes can change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so new dev boxes use the new configuration.
+When you update the image used in a dev box definition, ensure that you have sufficient IP addresses available in your virtual network. Extra free IP addresses are necessary for the Azure network connection health check; if the health check fails, the dev box definition doesn't update. You need one IP address per dev box, plus two IP addresses for the health check and Dev Box infrastructure.
+ You can update the image, image version, compute, and storage settings for a dev box definition: 1. Sign in to the [Azure portal](https://portal.azure.com).
event-hubs Monitor Event Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs-reference.md
Title: Monitoring Azure Event Hubs data reference
-description: Important reference material needed when you monitor Azure Event Hubs.
+ Title: Monitoring data reference for Azure Event Hubs
+description: This article contains important reference material you need when you monitor Azure Event Hubs by using Azure Monitor.
Last updated : 06/20/2024+ - Previously updated : 10/06/2022+
+# Azure Event Hubs monitoring data reference
-# Monitoring Azure Event Hubs data reference
-See [Monitoring Azure Event Hubs](monitor-event-hubs.md) for details on collecting and analyzing monitoring data for Azure Event Hubs.
+
+See [Monitor Azure Event Hubs](monitor-event-hubs.md) for details on the data you can collect for Event Hubs and how to use it.
+
+Azure Event Hubs creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises.
+
+Azure Event Hubs collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
++
+### Supported metrics for Microsoft.EventHub/clusters
+
+The following table lists the metrics available for the Microsoft.EventHub/clusters resource type.
++
+### Supported metrics for Microsoft.EventHub/Namespaces
+
+The following table lists the metrics available for the Microsoft.EventHub/Namespaces resource type.
++
+The following tables list all the platform metrics automatically collected for Azure Event Hubs. The resource provider for these metrics is `Microsoft.EventHub/clusters` or `Microsoft.EventHub/namespaces`.
+
+*Request metrics* count the number of data and management operations requests. This table provides more information about values from the preceding tables.
+
+| Metric name | Description |
+|:--|:|
+| Incoming Requests | The number of requests made to the Event Hubs service over a specified period. This metric includes all the data and management plane operations. |
+| Successful Requests | The number of successful requests made to the Event Hubs service over a specified period. |
+| Throttled Requests | The number of requests that were throttled because the usage was exceeded. |
+
+This table provides more information for message metrics from the preceding tables.
+
+| Metric name | Description |
+|:|:|
+| Incoming Messages | The number of events or messages sent to Event Hubs over a specified period. |
+| Outgoing Messages | The number of events or messages received from Event Hubs over a specified period. |
+| Captured Messages | The number of captured messages. |
+| Incoming Bytes | Incoming bytes for an event hub over a specified period. |
+| Outgoing Bytes | Outgoing bytes for an event hub over a specified period. |
+| Size | Size of an event hub in bytes. |
> [!NOTE]
-> Azure Monitor doesn't include dimensions in the exported metrics data, that's sent to a destination like Azure Storage, Azure Event Hubs, Log Analytics, etc.
+> - These values are point-in-time values. Incoming messages that are consumed immediately after that point-in-time might not be reflected in these metrics.
+> - The Incoming Requests metric includes all the data and management plane operations. The Incoming Messages metric gives you the total number of events that are sent to the event hub. For example, if you send a batch of 100 events to an event hub, it counts as 1 incoming request and 100 incoming messages.
+
+This table provides more information for capture metrics from the preceding tables.
+| Metric name | Description |
+|:|:|
+| Captured Messages | The number of captured messages. |
+| Captured Bytes | Captured bytes for an event hub. |
+| Capture Backlog | Capture backlog for an event hub. |
-## Metrics
-This section lists all the automatically collected platform metrics collected for Azure Event Hubs. The resource provider for these metrics is `Microsoft.EventHub/clusters` or `Microsoft.EventHub/namespaces`.
+This table provides more information for connection metrics from the preceding tables.
-### Request metrics
-Counts the number of data and management operations requests.
+| Metric name | Description |
+|:|:|
+| Active Connections | The number of active connections on a namespace and on an entity (event hub) in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time might not be reflected in the metric. |
+| Connections Opened | The number of open connections. |
+| Connections Closed | The number of closed connections. |
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | - | -- | | | |
-| Incoming Requests| Yes | Count | Count | The number of requests made to the Event Hubs service over a specified period. This metric includes all the data and management plane operations. | Entity name|
-| Successful Requests| No | Count | Count | The number of successful requests made to the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result |
-| Throttled Requests| No | Count | Count | The number of requests that were throttled because the usage was exceeded. | Entity name<br/><br/>Operation Result |
+This table provides more information for error metrics from the preceding tables.
-The following two types of errors are classified as **user errors**:
+| Metric name | Description |
+|:|:|
+| Server Errors | The number of requests not processed because of an error in the Event Hubs service over a specified period. |
+| User Errors | The number of requests not processed because of user errors over a specified period. |
+| Quota Exceeded Errors | The number of errors caused by exceeding quotas over a specified period. |
+
+The following two types of errors are classified as *user errors*:
1. Client-side errors (in HTTP, these would be 400 errors).
2. Errors that occur while processing messages.
+> [!NOTE]
+> Logic Apps creates epoch receivers. Receivers can be moved from one node to another depending on the service load. During those moves, `ReceiverDisconnection` exceptions might occur. They are counted as user errors on the Event Hubs service side. Logic Apps can collect failures from Event Hubs clients so that you can view them in user logs.
-### Message metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | - | -- | | | |
-|Incoming Messages| Yes | Count | Count | The number of events or messages sent to Event Hubs over a specified period. | Entity name|
-|Outgoing Messages| Yes | Count | Count | The number of events or messages received from Event Hubs over a specified period. | Entity name |
-| Captured Messages| No | Count| Count | The number of captured messages. | Entity name |
-|Incoming Bytes | Yes | Bytes | Count | Incoming bytes for an event hub over a specified period. | Entity name|
-|Outgoing Bytes | Yes | Bytes | Count | Outgoing bytes for an event hub over a specified period. | Entity name |
-| Size | No | Bytes | Average | Size of an event hub in bytes.|Entity name |
-> [!NOTE]
-> - These values are point-in-time values. Incoming messages that were consumed immediately after that point-in-time may not be reflected in these metrics.
-> - The **Incoming requests** metric includes all the data and management plane operations. The **Incoming messages** metric gives you the total number of events that are sent to the event hub. For example, if you send a batch of 100 events to an event hub, it'll count as 1 incoming request and 100 incoming messages.
-
-### Capture metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | -- | | | | |
-| Captured Messages| No | Count| Count | The number of captured messages. | Entity name |
-| Captured Bytes | No | Bytes | Count | Captured bytes for an event hub | Entity name |
-| Capture Backlog | No | Count| Count | Capture backlog for an event hub | Entity name |
--
-### Connection metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | -- | | | | |
-|Active Connections| No | Count | Average | The number of active connections on a namespace and on an entity (event hub) in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric.| Entity name |
-|Connections Opened | No | Count | Average | The number of open connections. | Entity name |
-|Connections Closed | No | Count | Average| The number of closed connections. | Entity name |
-
-### Error metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | -- | | | | |
-|Server Errors| No | Count | Count | The number of requests not processed because of an error in the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result |
-|User Errors | No | Count | Count | The number of requests not processed because of user errors over a specified period. | Entity name<br/><br/>Operation Result|
-|Quota Exceeded Errors | No |Count | Count | The number of errors caused by exceeding quotas over a specified period. | Entity name<br/><br/>Operation Result|
+| Dimension name | Description |
+|:-|:|
+| EntityName | Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension has a value of `-NamespaceOnlyMetric-` in addition to all your event hubs. It represents the requests that were made at the namespace level. Examples include a request to list all event hubs in the namespace or requests to entities that failed authentication or authorization. |
+| OperationResult | Indicates either `success` or the appropriate error state, such as `serverbusy`, `clienterror`, or `quotaexceeded`. |
+
+Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level.
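As one concrete use of dimensions, the sketch below shows the `criteria` portion of an Azure Monitor metric alert that filters the Throttled Requests metric by the EntityName dimension. This is an assumption-based illustration (the alert criterion name and event hub name are placeholders), not a verified template:

```json
{
  "criteria": {
    "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
    "allOf": [
      {
        "name": "ThrottledRequestsOnMyHub",
        "metricName": "ThrottledRequests",
        "dimensions": [
          {
            "name": "EntityName",
            "operator": "Include",
            "values": [ "my-event-hub" ]
          }
        ],
        "operator": "GreaterThan",
        "threshold": 0,
        "timeAggregation": "Total"
      }
    ]
  }
}
```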
> [!NOTE]
-> Logic Apps creates epoch receivers and receivers may be moved from one node to another depending on the service load. During those moves, `ReceiverDisconnection` exceptions may occur. They are counted as user errors on the Event Hubs service side. Logic Apps may collect failures from Event Hubs clients so that you can view them in user logs.
+> When you enable metrics in a diagnostic setting, dimension information isn't currently included as part of the information sent to a storage account, event hub, or log analytics.
+
-## Metric dimensions
+### Supported resource logs for Microsoft.EventHub/Namespaces
-Azure Event Hubs supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level.
-|Dimension name|Description|
-| - | -- |
-|Entity Name| Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension has a value of '-NamespaceOnlyMetric-' in addition to all your event hubs. It represents the requests that were made at the namespace level. Examples include a request to list all event hubs in the namespace or requests to entities that failed authentication or authorization.|
+### Event Hubs Microsoft.EventHub/namespaces
-## Resource logs
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
+- [AZMSApplicationMetricLogs](/azure/azure-monitor/reference/tables/azmsapplicationmetriclogs#columns)
+- [AZMSOperationalLogs](/azure/azure-monitor/reference/tables/azmsoperationallogs#columns)
+- [AZMSRunTimeAuditLogs](/azure/azure-monitor/reference/tables/azmsruntimeauditlogs#columns)
+- [AZMSDiagnosticErrorLogs](/azure/azure-monitor/reference/tables/azmsdiagnosticerrorlogs#columns)
+- [AZMSVnetConnectionEvents](/azure/azure-monitor/reference/tables/azmsvnetconnectionevents#columns)
+- [AZMSArchiveLogs](/azure/azure-monitor/reference/tables/azmsarchivelogs#columns)
+- [AZMSAutoscaleLogs](/azure/azure-monitor/reference/tables/azmsautoscalelogs#columns)
+- [AZMSKafkaCoordinatorLogs](/azure/azure-monitor/reference/tables/azmskafkacoordinatorlogs#columns)
+- [AZMSKafkaUserErrorLogs](/azure/azure-monitor/reference/tables/azmskafkausererrorlogs#columns)
+- [AZMSCustomerManagedKeyUserLogs](/azure/azure-monitor/reference/tables/azmscustomermanagedkeyuserlogs#columns)
-Azure Event Hubs now has the capability to dispatch logs to either of two destination tables - Azure Diagnostic or [Resource specific tables](~/articles/azure-monitor/essentials/resource-logs.md) in Log Analytics. You could use the toggle available on Azure portal to choose destination tables.
+### Event Hubs resource logs
+
+Azure Event Hubs now has the capability to dispatch logs to either of two destination tables: Azure Diagnostics or [Resource specific tables](~/articles/azure-monitor/essentials/resource-logs.md) in Log Analytics. You can use the toggle available in the Azure portal to choose the destination table.
:::image type="content" source="media/monitor-event-hubs-reference/destination-table-toggle.png" alt-text="Screenshot of dialog box to set destination table." lightbox="media/monitor-event-hubs-reference/destination-table-toggle.png":::
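To make the toggle concrete, here's a minimal sketch of a diagnostic setting payload that routes logs to resource-specific tables. The workspace path is a placeholder, and the assumption here is that setting `logAnalyticsDestinationType` to `Dedicated` selects resource-specific tables (omitting it sends logs to the legacy AzureDiagnostics table):

```json
{
  "properties": {
    "workspaceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>",
    "logAnalyticsDestinationType": "Dedicated",
    "logs": [
      {
        "categoryGroup": "allLogs",
        "enabled": true
      }
    ],
    "metrics": [
      {
        "category": "AllMetrics",
        "enabled": true
      }
    ]
  }
}
```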
+Azure Event Hubs uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#event-hubs).
+
+You can view our sample queries to get started with different log categories.
+
+> [!IMPORTANT]
+> Dimensions aren't exported to a Log Analytics workspace.
+ [!INCLUDE [event-hubs-diagnostic-log-schema](./includes/event-hubs-diagnostic-log-schema.md)]
+### Runtime audit logs
-## Runtime audit logs
-Runtime audit logs capture aggregated diagnostic information for all data plane access operations (such as send or receive events) in Event Hubs.
+Runtime audit logs capture aggregated diagnostic information for all data plane access operations (such as send or receive events) in Event Hubs.
-> [!NOTE]
+> [!NOTE]
> Runtime audit logs are available only in **premium** and **dedicated** tiers. Runtime audit logs include the elements listed in the following table: - | Name | Description | Supported in Azure Diagnostics | Supported in Resource Specific table |
-| - | -| --| --|
+|:- |:-|:--|:--|
| `ActivityId` | A randomly generated UUID that ensures uniqueness for the audit activity. | Yes | Yes | | `ActivityName` | Runtime operation name.| Yes | Yes | | `ResourceId` | Resource associated with the activity. | Yes | Yes | | `Timestamp` | Aggregation time. | Yes | No |
-| `TimeGenerated [UTC]`|Time of executed operation (in UTC)| No | Yes |
+| `TimeGenerated [UTC]`|Time of executed operation (in UTC) | No | Yes |
| `Status` | Status of the activity (success or failure). | Yes | Yes | | `Protocol` | Type of the protocol associated with the operation. | Yes | Yes |
-| `AuthType` | Type of authentication (Azure Active Directory or SAS Policy). | Yes | Yes |
-| `AuthKey` | Azure Active Directory application ID or SAS policy name that's used to authenticate to a resource. | Yes | Yes |
+| `AuthType` | Type of authentication (Microsoft Entra ID or SAS Policy). | Yes | Yes |
+| `AuthKey` | Microsoft Entra application ID or SAS policy name that's used to authenticate to a resource. | Yes | Yes |
| `NetworkType` | Type of the network access: `Public` or `Private`. | Yes | Yes | | `ClientIP` | IP address of the client application. | Yes | Yes | | `Count` | Total number of operations performed during the aggregated period of 1 minute. | Yes | Yes | | `Properties` | Metadata that are specific to the data plane operation. | Yes | Yes |
-| `Category` | Log category | Yes | NO |
-| `Provider`|Name of Service emitting the logs, such as Eventhub | No | Yes |
+| `Category` | Log category | Yes | No |
+| `Provider`|Name of Service emitting the logs, such as EventHubs | No | Yes |
| `Type` | Type of logs emitted | No | Yes | Here's an example of a runtime audit log entry:
-AzureDiagnostics :
+AzureDiagnostics:
+ ```json { "ActivityId": "<activity id>",
AzureDiagnostics :
} ```+ Resource specific table entry:+ ```json { "ActivityId": "<activity id>",
Resource specific table entry:
```
-## Application metrics logs
-Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics.
+### Application metrics logs
-> [!NOTE]
-> Application metrics logs are available only in **premium** and **dedicated** tiers.
+Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics.
+
+> [!NOTE]
+> Application metrics logs are available only in **premium** and **dedicated** tiers.
| Name | Description |
-| - | - |
+|:-|:- |
| `ConsumerLag` | Indicate the lag between consumers and producers. | | `NamespaceActiveConnections` | Details of active connections established from a client to the event hub. | | `GetRuntimeInfo` | Obtain run time information from Event Hubs. |
| `OffsetCommit` | Number of offset commit calls made to the event hub. |
| `OffsetFetch` | Number of offset fetch calls made to the event hub. |
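To see which of these metrics your clients are actually emitting, you can group the resource-specific table by activity. A minimal sketch, assuming the `AZMSApplicationMetricLogs` table and an `ActivityName` column that carries the metric name:

```kusto
// Count application metric log entries per runtime metric over the last hour
AZMSApplicationMetricLogs
| where TimeGenerated > ago(1h)
| summarize Entries = count() by ActivityName
```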
-## Diagnostic Error Logs
-Diagnostic error logs capture error messages for any client side, throttling and Quota exceeded errors. They provide detailed diagnostics for error identification.
+### Diagnostic Error Logs
+
+Diagnostic error logs capture error messages for any client-side, throttling, and quota exceeded errors. They provide detailed diagnostics for error identification.
-Diagnostic Error Logs include elements listed in below table:
+Diagnostic error logs include the elements listed in the following table:
| Name | Description | Supported in Azure Diagnostics | Supported in AZMSDiagnosticErrorLogs (Resource specific table) |
-| ||| |
+|:|:|:|:|
| `ActivityId` | A randomly generated UUID that ensures uniqueness for the audit activity. | Yes | Yes |
| `ActivityName` | Operation name. | Yes | Yes |
| `NamespaceName` | Name of the namespace. | Yes | Yes |
Here's an example of a diagnostic error log entry:
}
```

Resource specific table entry:

```json
{
    "ActivityId": "0000000000-0000-0000-0000-00000000000000",
    ...
}
```
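A quick way to pull recent entries from the resource-specific error table is shown below; a minimal sketch, assuming the `AZMSDiagnosticErrorLogs` table named in the header above:

```kusto
// Most recent diagnostic errors, newest first
AZMSDiagnosticErrorLogs
| where TimeGenerated > ago(1h)
| project TimeGenerated, ActivityName, NamespaceName
| order by TimeGenerated desc
```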
-## Azure Monitor Logs tables
-Azure Event Hubs uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#event-hubs).
-You can view our sample queries to get started with different log categories.
-
-> [!IMPORTANT]
-> Dimensions aren't exported to a Log Analytics workspace.
+- [Microsoft.EventHub resource provider operations](/azure/role-based-access-control/permissions/integration#microsofteventhub)
+## Related content
-## Next steps
-- For details on monitoring Azure Event Hubs, see [Monitoring Azure Event Hubs](monitor-event-hubs.md).-- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+- See [Monitor Azure Event Hubs](monitor-event-hubs.md) for a description of monitoring Event Hubs.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md
Title: Monitoring Azure Event Hubs
+ Title: Monitor Azure Event Hubs
description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Event Hubs.
+Last updated : 06/20/2024
-Previously updated : 04/05/2024

# Monitor Azure Event Hubs
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Event Hubs and how to analyze and alert on this data with Azure Monitor.
-## What is Azure Monitor?
-Azure Event Hubs creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises.
-Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
+Azure Monitor documentation describes the following concepts:
- What is Azure Monitor?
- Costs associated with monitoring
- Configuring data collection
- Standard tools in Azure for analyzing and alerting on monitoring data
-The following sections build on this article by describing the specific data gathered for Azure Event Hubs. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.
+The following sections describe the specific data gathered for Azure Event Hubs. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.
> [!TIP]
> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
-## Monitoring data from Azure Event Hubs
-Azure Event Hubs collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+For more information about the resource types for Event Hubs, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md).
-See [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md) for a detailed reference of the logs and metrics created by Azure Event Hubs.
-## Collection and routing
-Platform metrics and the activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+- Azure Storage
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+ If you use Azure Storage to store the diagnostic logging information, the information is stored in containers named *insights-logs-operationlogs* and *insights-metrics-pt1m*. Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.EVENTHUB/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Event Hubs are listed in [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#resource-logs).
+- Azure Event Hubs
-> [!NOTE]
-> Azure Monitor doesn't include dimensions in the exported metrics data, that's sent to a destination like Azure Storage, Azure Event Hubs, Log Analytics, etc.
--
-### Azure Storage
-If you use **Azure Storage** to store the diagnostic logging information, the information is stored in containers named **insights-logs-operationlogs** and **insights-metrics-pt1m**. Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar.
+ If you use Azure Event Hubs to store the diagnostic logging information, the information is stored in Event Hubs instances named *insights-logs-operationlogs* and *insights-metrics-pt1m*. You can also select an existing event hub except for the event hub for which you're configuring diagnostic settings.
-### Azure Event Hubs
-If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in Event Hubs instances named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select an existing event hub except for the event hub for which you're configuring diagnostic settings.
+- Log Analytics
-### Log Analytics
-If you use **Log Analytics** to store the diagnostic logging information, the information is stored in tables named **AzureDiagnostics** / **AzureMetrics** or **resource specific tables**
+ If you use Log Analytics to store the diagnostic logging information, the information is stored in tables named *AzureDiagnostics / AzureMetrics* or resource specific tables.
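Once data is flowing, a quick sanity check in the workspace shows which log categories are arriving. A minimal sketch against the Azure Diagnostics table (category names per the Event Hubs resource log reference):

```kusto
// Count log entries per category emitted by Event Hubs
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.EVENTHUB"
| summarize Entries = count() by Category
```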
> [!IMPORTANT]
-> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
+> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
> [!NOTE]
> When you enable metrics in a diagnostic setting, dimension information isn't currently included as part of the information sent to a storage account, event hub, or Log Analytics workspace.
-The metrics and logs you can collect are discussed in the following sections.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Event Hubs are listed in [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#resource-logs).
+
+> [!NOTE]
+> Azure Monitor doesn't include dimensions in the exported metrics data that's sent to a destination like Azure Storage, Azure Event Hubs, and Log Analytics.
+
+For a list of available metrics for Event Hubs, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#metrics).
+
+### Analyze metrics
-## Analyze metrics
You can analyze metrics for Azure Event Hubs, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Event Hubs data reference metrics](monitor-event-hubs-reference.md#metrics).

:::image type="content" source="./media/monitor-event-hubs/metrics.png" alt-text="Screenshot showing the Metrics Explorer for an Event Hubs namespace." lightbox="./media/monitor-event-hubs/metrics.png":::
For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).

> Azure Monitor metrics data is available for 90 days. However, when creating charts only 30 days can be visualized. For example, if you want to visualize a 90 day period, you must break it into three charts of 30 days within the 90 day period.

### Filter and split

For metrics that support dimensions, you can apply filters using a dimension value. For example, add a filter with `EntityName` set to the name of an event hub. You can also split a metric by dimension to visualize how different segments of the metric compare with each other. For more information on filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md).

:::image type="content" source="./media/monitor-event-hubs/metrics-filter-split.png" alt-text="Screenshot showing the Metrics Explorer for an Event Hubs namespace with a filter." lightbox="./media/monitor-event-hubs/metrics-filter-split.png":::
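If you route platform metrics to a Log Analytics workspace, you can run the same kind of analysis as a query. A minimal sketch, assuming the standard `AzureMetrics` table and the `IncomingMessages` metric (adjust the metric name to the one you're investigating):

```kusto
// Five-minute average of incoming messages from exported platform metrics
AzureMetrics
| where ResourceProvider == "MICROSOFT.EVENTHUB"
| where MetricName == "IncomingMessages"
| summarize avg(Average) by bin(TimeGenerated, 5m)
| render timechart
```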
-## Analyze logs
-Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable __Send information to Log Analytics__. For more information, see the [Collection and routing](#collection-and-routing) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties. Azure Event Hubs has the capability to dispatch logs to either of two destination tables - Azure Diagnostic or Resource specific tables in Log Analytics.For a detailed reference of the logs and metrics, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md).
+
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for Event Hubs, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md#resource-logs).
+
+### Analyze logs
+
+Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable **Send information to Log Analytics**. For more information, see the [Metrics](#azure-monitor-platform-metrics) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties. Azure Event Hubs can dispatch logs to either of two destination tables: Azure Diagnostics or resource-specific tables in Log Analytics. For a detailed reference of the logs and metrics, see [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md).
> [!IMPORTANT]
> When you select **Logs** from the Azure Event Hubs menu, Log Analytics is opened with the query scope set to the current workspace. This means that log queries will only include data from that resource. If you want to run a query that includes data from other databases or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
-### Sample Kusto queries
+### Use runtime logs
-> [!IMPORTANT]
-> When you select **Logs** from the Azure Event Hubs menu, Log Analytics is opened with the query scope set to the current Azure Event Hubs namespace. This means that log queries will only include data from that resource. If you want to run a query that includes data from other workspaces or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+Azure Event Hubs allows you to monitor and audit data plane interactions of your client applications using runtime audit logs and application metrics logs.
-Following are sample queries that you can use to help you monitor your Azure Event Hubs resources:
+Using *runtime audit logs*, you can capture aggregated diagnostic information for all data plane access operations, such as publishing or consuming events. *Application metrics logs* capture the aggregated data on certain runtime metrics (such as consumer lag and active connections) related to client applications that are connected to Event Hubs.
-### [AzureDiagnostics](#tab/AzureDiagnostics)
+> [!NOTE]
+> Runtime audit logs are available only in **premium** and **dedicated** tiers.
-+ Get errors from the past seven days
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated > ago(7d)
- | where ResourceProvider =="MICROSOFT.EVENTHUB"
- | where Category == "OperationalLogs"
- | summarize count() by "EventName"
-
-+ Get runtime audit logs generated in the last one hour.
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated > ago(1h)
- | where ResourceProvider =="MICROSOFT.EVENTHUB"
- | where Category == "RuntimeAuditLogs"
- ```
-+ Get access attempts to a key vault that resulted in "key not found" error.
-
- ```Kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.EVENTHUB"
- | where Category == "Error" and OperationName == "wrapkey"
- | project Message
- ```
-
-+ Get operations performed with a key vault to disable or restore the key.
-
- ```Kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.EVENTHUB"
- | where Category == "info" and OperationName == "disable" or OperationName == "restore"
- | project Message
- ```
-+ Get capture failures and their duration in seconds
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.EVENTHUB"
- | where Category == "ArchiveLogs"
- | summarize count() by "failures", "durationInSeconds"
- ```
-
-### [Resource Specific Table](#tab/Resourcespecifictable)
+### Enable runtime logs
+You can enable either runtime audit or application metrics logging by selecting *Diagnostic settings* from the *Monitoring* section on the Event Hubs namespace page in the Azure portal. Select **Add diagnostic setting** as shown in the following image.
-+ Get Operational Logs for event hub resource for last 7 days
- ```Kusto
- AZMSOperationalLogs
- | where Timegenerated > ago(7d)
- | where Provider == "EVENTHUB"
- | where resourceId == "<Resource Id>" // Replace your resource Id
- ```
+Then you can enable log categories *RuntimeAuditLogs* or *ApplicationMetricsLogs* as needed.
-+ Get capture logs for event hub for last 7 days
- ```Kusto
- AZMSArchiveLogs
- | where EventhubName == "<Event Hub Name>" //Enter event hub entity name
- | where TimeGenerated > ago(7d)
- ```
+Once runtime logs are enabled, Event Hubs starts collecting and storing them according to the diagnostic setting configuration.
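To confirm that both categories are being collected, you can check for recent entries in the resource-specific tables. A minimal sketch, assuming resource-specific destination tables and the `Type` column described in the data reference:

```kusto
// Count recent entries per destination table
union AZMSRuntimeAuditLogs, AZMSApplicationMetricLogs
| where TimeGenerated > ago(15m)
| summarize Entries = count() by Type
```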
+### Publish and consume sample data
-
-## Use runtime logs
+To collect sample runtime audit logs in your Event Hubs namespace, you can publish and consume sample data using client applications that are based on the [Event Hubs SDK](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md). That SDK uses Advanced Message Queuing Protocol (AMQP). Or you can use any [Apache Kafka client application](../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md).
-Azure Event Hubs allows you to monitor and audit data plane interactions of your client applications using runtime audit logs and application metrics logs.
+Application metrics include the following runtime metrics.
-Using *Runtime audit logs* you can capture aggregated diagnostic information for all data plane access operations such as publishing or consuming events.
-*Application metrics logs* capture the aggregated data on certain runtime metrics (such as consumer lag and active connections) related to client applications are connected to Event Hubs.
-> [!NOTE]
-> Runtime audit logs are available only in **premium** and **dedicated** tiers.
+Therefore, you can use application metrics to monitor runtime metrics, such as consumer lag or active connections, from a given client application. Fields associated with application metrics logs are defined in the [application metrics logs reference](../event-hubs/monitor-event-hubs-reference.md#application-metrics-logs).
-### Enable runtime logs
-You can enable either runtime audit or application metrics logging by selecting *Diagnostic settings* from the *Monitoring* section on the Event Hubs namespace page in Azure portal. Select **Add diagnostic setting** as shown in the following image.
-Then you can enable log categories *RuntimeAuditLogs* or *ApplicationMetricsLogs* as needed.
+
+### Sample Kusto queries
+
+The following are sample queries that you can use to help you monitor your Azure Event Hubs resources:
+
+### [AzureDiagnostics](#tab/AzureDiagnostics)
+
+- Get errors from the past seven days.
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(7d)
+ | where ResourceProvider == "MICROSOFT.EVENTHUB"
+ | where Category == "OperationalLogs"
+ | summarize count() by EventName_s // AzureDiagnostics appends type suffixes to custom columns
+ ```
+
+- Get runtime audit logs generated in the last one hour.
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(1h)
+ | where ResourceProvider == "MICROSOFT.EVENTHUB"
+ | where Category == "RuntimeAuditLogs"
+ ```
+
+- Get access attempts to a key vault that resulted in "key not found" error.
+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.EVENTHUB"
+ | where Category == "Error" and OperationName == "wrapkey"
+ | project Message
+ ```
+
+- Get operations performed with a key vault to disable or restore the key.
+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.EVENTHUB"
+ | where Category == "info" and OperationName == "disable" or OperationName == "restore"
+ | project Message
+ ```
+
+- Get capture failures and their duration in seconds.
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.EVENTHUB"
+ | where Category == "ArchiveLogs"
+ | summarize count() by failures_d, durationInSeconds_d // AzureDiagnostics appends type suffixes to custom columns
+ ```
-Once runtime logs are enabled, Event Hubs start collecting and storing them according to the diagnostic setting configuration.
+### [Resource Specific Table](#tab/Resourcespecifictable)
+
+- Get Operational Logs for event hub resource for last seven days.
+
+ ```Kusto
+ AZMSOperationalLogs
+ | where TimeGenerated > ago(7d)
+ | where Provider == "EVENTHUB"
+ | where _ResourceId == "<Resource Id>" // Replace with your resource ID
+ ```
+
+- Get capture logs for event hub for last seven days.
-### Publish and consume sample data
-To collect sample runtime audit logs in your Event Hubs namespace, you can publish and consume sample data using client applications, which are based on [Event Hubs SDK](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md), which uses Advanced Message Queuing Protocol (AMQP) or using any [Apache Kafka client application](../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md).
+ ```Kusto
+ AZMSArchiveLogs
+ | where EventhubName == "<Event Hub Name>" //Enter event hub entity name
+ | where TimeGenerated > ago(7d)
+ ```
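As a variation on the first query above, you can also group operational log entries by event to spot unusual activity. A minimal sketch against the resource-specific table (the `EventName` column is an assumption carried over from the Azure Diagnostics examples):

```kusto
// Operational events per type over the last seven days
AZMSOperationalLogs
| where TimeGenerated > ago(7d)
| summarize Operations = count() by EventName
```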
+ ### Analyze runtime audit logs
-You can analyze the collected runtime audit logs using the following sample query.
+
+You can analyze the collected runtime audit logs using the following sample query.
### [AzureDiagnostics](#tab/AzureDiagnosticsforRuntimeAudit)
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.EVENTHUB"
| where Category == "RuntimeAuditLogs"
```

### [Resource Specific Table](#tab/ResourcespecifictableforRuntimeAudit)

```kusto
AZMSRuntimeAuditLogs
| where TimeGenerated > ago(1h)
| where Provider == "EVENTHUB"
```
-Up on the execution of the query you should be able to obtain corresponding audit logs in the following format.
+
+Upon executing the query, you should obtain the corresponding audit logs in the following format.
+ :::image type="content" source="./media/monitor-event-hubs/runtime-audit-logs.png" alt-text="Image showing the result of a sample query to analyze runtime audit logs." lightbox="./media/monitor-event-hubs/runtime-audit-logs.png":::
-By analyzing these logs, you should be able to audit how each client application interacts with Event Hubs. Each field associated with runtime audit logs is defined in [runtime audit logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
+By analyzing these logs, you should be able to audit how each client application interacts with Event Hubs. Each field associated with runtime audit logs is defined in [runtime audit logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
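For example, to see which identities and operations generate the most traffic, you can aggregate on the fields from that reference. A minimal sketch against the resource-specific table:

```kusto
// Total operations per SAS policy or application ID, operation, and outcome
AZMSRuntimeAuditLogs
| where TimeGenerated > ago(1h)
| where Provider == "EVENTHUB"
| summarize TotalOperations = sum(Count) by AuthKey, ActivityName, Status
| order by TotalOperations desc
```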
+### Analyze application metrics
-### Analyze application metrics
-You can analyze the collected application metrics logs using the following sample query.
+You can analyze the collected application metrics logs using the following sample query.
### [AzureDiagnostics](#tab/AzureDiagnosticsforAppMetrics)
AZMSApplicationMetricLogs
| where TimeGenerated > ago(1h)
| where Provider == "EVENTHUB"
```
-Application metrics include the following runtime metrics.
-Therefore you can use application metrics to monitor runtime metrics such as consumer lag or active connection from a given client application. Fields associated with runtime audit logs are defined in [application metrics logs reference](../event-hubs/monitor-event-hubs-reference.md#runtime-audit-logs).
+
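To focus on a single runtime metric, such as consumer lag, you can filter on the activity name. A minimal sketch, assuming `ActivityName` carries the metric name listed in the data reference:

```kusto
// Consumer lag entries from application metrics logs over the last hour
AZMSApplicationMetricLogs
| where TimeGenerated > ago(1h)
| where Provider == "EVENTHUB"
| where ActivityName == "ConsumerLag"
```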
-## Alerts
You can access alerts for Azure Event Hubs by selecting **Alerts** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md) for details on creating alerts.
+### Event Hubs alert rules
+
+The following table lists some suggested alert rules for Event Hubs. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md).
+
+| Alert type | Condition | Description |
+|:|:|:|
+| Metric | CPU | When CPU utilization exceeds a set value. |
+| Metric | Available Memory | When Available Memory drops below a set value. |
+| Metric | Capture Backlog | When Capture Backlog is above a certain value. |
+
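If you route resource logs to a workspace, you can also back an alert rule with a log query instead of a metric. A minimal sketch for a failed-operations condition, assuming the resource-specific runtime audit table and its `Status` column:

```kusto
// Count of non-successful runtime operations in the evaluation window
AZMSRuntimeAuditLogs
| where TimeGenerated > ago(5m)
| where Status != "Success" // Status values assumed to be Success/Failure
| summarize FailedOperations = sum(Count)
```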
-## Next steps
+## Related content
-- For a reference of the logs and metrics, see [Monitoring Azure Event Hubs data reference](monitor-event-hubs-reference.md).-- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+- See [Azure Event Hubs monitoring data reference](monitor-event-hubs-reference.md) for a reference of the metrics, logs, and other important values created for Event Hubs.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
event-hubs Resource Governance With App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-with-app-groups.md
You can create an application group using the Azure portal by following these steps:
1. Confirm that **Enabled** is selected. To have the application group in the disabled state first, clear the **Enabled** option. This flag determines whether the clients of an application group can access Event Hubs or not.
1. For **Security context type**, select **Namespace Shared access policy**, **event hub Shared Access Policy**, or **Microsoft Entra application**. An application group supports the selection of a SAS key at either the namespace or the entity (event hub) level. When you create the application group, you should associate it with either a shared access signature (SAS) policy or a Microsoft Entra application ID, which is used by client applications.
1. If you selected **Namespace Shared access policy**:
- 1. For **SAS key name**, select the SAS policy that can be used as a security context for this application group.You can select **Add SAS Policy** to add a new policy and then associate with the application group.
+ 1. For **SAS key name**, select the SAS policy that can be used as a security context for this application group. You can select **Add SAS Policy** to add a new policy and then associate with the application group.
:::image type="content" source="./media/resource-governance-with-app-groups/create-application-groups-with-namespace-shared-access-key.png" alt-text="Screenshot of the Add application group page with Namespace Shared access policy option selected.":::

1. If you selected **Event Hubs Shared access policy**:
The following ARM template shows how to update an existing namespace (`contosona
### Decide threshold value for throttling policies
-Azure Event Hubs supports [Application Metric Logs ](monitor-event-hubs-reference.md#application-metrics-logs) functionality to observe usual throughput within your system and accordingly decide on the threshold value for application group. You can follow these steps to decide on a threshold value:
+Azure Event Hubs supports [Application Metric Logs](monitor-event-hubs-reference.md#application-metrics-logs) functionality to observe usual throughput within your system and accordingly decide on the threshold value for application group. You can follow these steps to decide on a threshold value:
-1. Turn on [diagnostic settings](monitor-event-hubs.md#collection-and-routing) in Event Hubs with **Application Metric logs** as selected category and choose **Log Analytics** as destination.
+1. Turn on [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) in Event Hubs with **Application Metric logs** as the selected category, and choose **Log Analytics** as the destination.
2. Create an empty application group without any throttling policy.
3. Continue sending messages/events to the event hub at the usual throughput.
4. Go to the **Log Analytics workspace** and query for the right activity name (based on the [throttling policy threshold limits](resource-governance-overview.md#throttling-policythreshold-limits)) in the **AzureDiagnostics** table. The following sample query is set to track the threshold value for incoming messages:
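A minimal sketch of such a query, assuming the `AzureDiagnostics` table with an `ActivityName_s` value of `IncomingMessages` and a `Count_d` column (names are assumptions based on the application metric logs schema; adjust to your workspace):

```kusto
// Per-minute incoming message volume, to help pick a throttling threshold
AzureDiagnostics
| where Category == "ApplicationMetricsLogs"
| where ActivityName_s == "IncomingMessages"
| summarize IncomingMessagesPerMinute = sum(Count_d) by bin(TimeGenerated, 1m)
| render timechart
```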
You can use the below example query to find out all the throttled requests in a certain time period:
| where Outcome_s == "Throttled"
```
-Due to restrictions at protocol level, throttled request logs are not generated for consumer operations within event hub ( `OutgoingMessages` or `OutgoingBytes`). when requests are throttled at consumer side, you would observe sluggish egress throughput.
+Due to restrictions at the protocol level, throttled request logs aren't generated for consumer operations within an event hub (`OutgoingMessages` or `OutgoingBytes`). When requests are throttled on the consumer side, you would observe sluggish egress throughput.
## Next steps
expressroute Design Architecture For Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/design-architecture-for-resiliency.md
ExpressRoute uses [Azure Service Health](../service-health/overview.md) to notif
#### Configure gateway health monitoring & alerting
-[Setup monitoring](expressroute-monitoring-metrics-alerts.md#expressroute-gateways) using Azure Monitor for ExpressRoute Gateway availability, performance, and scalability. When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are multiple [gateway metrics](expressroute-monitoring-metrics-alerts.md#expressroute-virtual-network-gateway-metrics) available to you to better understand the performance of your gateway.
+[Set up monitoring](monitor-expressroute-reference.md#supported-metrics-for-microsoftnetworkexpressroutegateways) using Azure Monitor for ExpressRoute gateway availability, performance, and scalability. When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are multiple [gateway metrics](expressroute-monitoring-metrics-alerts.md#expressroute-virtual-network-gateway-metrics) available to you to better understand the performance of your gateway.
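If you export these gateway metrics to a Log Analytics workspace, you can alert on them with a log query as well. A minimal sketch, assuming the `AzureMetrics` table and the `ExpressRouteGatewayCpuUtilization` metric name (verify the exact name against the metrics reference):

```kusto
// Average gateway CPU utilization in five-minute bins
AzureMetrics
| where MetricName == "ExpressRouteGatewayCpuUtilization"
| summarize avg(Average) by bin(TimeGenerated, 5m), Resource
```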
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Previously updated : 08/31/2023
Last updated : 07/23/2023
With Virtual Network Peering and UDR support, FastPath will send traffic directl
With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. With both of these features enabled, FastPath will directly send traffic to a Private Endpoint deployed in a "spoke" Virtual Network.
-These scenarios are Generally Available for limited scenarios with connections associated to 100 Gbps ExpressRoute Direct circuits. To enable, follow the below guidance:
+These scenarios are generally available in limited scenarios, for connections associated with 10-Gbps and 100-Gbps ExpressRoute Direct circuits. To enable them, follow the guidance below:
1. Complete this [Microsoft Form](https://aka.ms/fastpathlimitedga) to request to enroll your subscription. Requests may take up to 4 weeks to complete, so plan deployments accordingly.
2. Once you receive a confirmation from Step 1, run the following Azure PowerShell command in the target Azure subscription.

   ```azurepowershell-interactive
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
- Title: 'Azure ExpressRoute: Monitoring, Metrics, and Alerts'
-description: Learn about Azure ExpressRoute monitoring, metrics, and alerts using Azure Monitor, the one stop shop for all metrics, alerting, diagnostic logs across Azure.
--- Previously updated : 03/31/2024---
-# ExpressRoute monitoring, metrics, and alerts
-
-This article helps you understand ExpressRoute monitoring, metrics, and alerts using Azure Monitor. Azure Monitor is one stop shop for all metrics, alerting, diagnostic logs across all of Azure.
-
-> [!NOTE]
-> Using **Classic Metrics** is not recommended.
->
-
-## ExpressRoute metrics
-
-To view **Metrics**, go to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
-
-Once a metric is selected, the default aggregation is applied. Optionally, you can apply splitting, which shows the metric with different dimensions.
-
-> [!IMPORTANT]
-> When viewing ExpressRoute metrics in the Azure portal, select a time granularity of **5 minutes or greater** for best possible results.
->
-> :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/metric-granularity.png" alt-text="Screenshot of time granularity options.":::
-
-### Aggregation Types:
-
-Metrics explorer supports sum, maximum, minimum, average and count as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). You should use the recommended Aggregation type when reviewing the insights for each ExpressRoute metric.
-
-* Sum: The sum of all values captured during the aggregation interval.
-* Count: The number of measurements captured during the aggregation interval.
-* Average: The average of the metric values captured during the aggregation interval.
-* Min: The smallest value captured during the aggregation interval.
-* Max: The largest value captured during the aggregation interval.
-
-### ExpressRoute circuit
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-| | | | | | | |
-| [ARP Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
-| [BGP Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
-| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Peering Type | Yes |
-| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Peering Type | Yes |
-| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Peering Type | Yes |
-| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Peering Type | Yes |
-| GlobalReachBitsInPerSecond | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeredCircuitSKey | Yes |
-| GlobalReachBitsOutPerSecond | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | PeeredCircuitSKey | Yes |
-| [FastPathRoutesCount](#fastpath-routes-count-at-circuit-level) | Fastpath | Count | Maximum | Count of FastPath routes configured on the circuit | None | Yes |
-
->[!NOTE]
->Using *GlobalGlobalReachBitsInPerSecond* and *GlobalGlobalReachBitsOutPerSecond* will only be visible if at least one Global Reach connection is established.
->
-
-### ExpressRoute gateways
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-| | | | | | | |
-| [Bits received per second](#gwbits) | Performance | BitsPerSecond | Average | Total bits received on ExpressRoute gateway per second | roleInstance | Yes |
-| [CPU utilization](#cpu) | Performance | Count | Average | CPU Utilization of the ExpressRoute Gateway | roleInstance | Yes |
-| [Packets per second](#packets) | Performance | CountPerSecond | Average | Total Packets received on ExpressRoute Gateway per second | roleInstance | Yes |
-| [Count of routes advertised to peer](#advertisedroutes) | Availability | Count | Maximum | Count Of Routes Advertised To Peer by ExpressRouteGateway | roleInstance | Yes |
-| [Count of routes learned from peer](#learnedroutes)| Availability | Count | Maximum | Count Of Routes Learned From Peer by ExpressRouteGateway | roleInstance | Yes |
-| [Frequency of routes changed](#frequency) | Availability | Count | Total | Frequency of Routes change in ExpressRoute Gateway | roleInstance | Yes |
-| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Estimated number of VMs in the virtual network | No Dimensions | Yes |
-| [Active flows](#activeflows) | Scalability | Count | Average | Number of active flows on ExpressRoute Gateway | roleInstance | Yes |
-| [Max flows created per second](#maxflows) | Scalability | FlowsPerSecond | Maximum | Maximum number of flows created per second on ExpressRoute Gateway | roleInstance, direction | Yes |
-
-### ExpressRoute Gateway connections
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-| | | | | | | |
-| [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second through ExpressRoute gateway | ConnectionName | Yes |
-| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | Yes |
-
-### ExpressRoute Direct
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-| | | | | | | |
-| [BitsInPerSecond](#directin) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Link | Yes |
-| [BitsOutPerSecond](#directout) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Link | Yes |
-| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Link | Yes |
-| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Link | Yes |
-| [AdminState](#admin) | Physical Connectivity | Count | Average | Admin state of the port | Link | Yes |
-| [LineProtocol](#line) | Physical Connectivity | Count | Average | Line protocol status of the port | Link | Yes |
-| [RxLightLevel](#rxlight) | Physical Connectivity | Count | Average | Rx Light level in dBm | Link, Lane | Yes |
-| [TxLightLevel](#txlight) | Physical Connectivity | Count | Average | Tx light level in dBm | Link, Lane | Yes |
-| [FastPathRoutesCount](#fastpath-routes-count-at-port-level) | FastPath | Count | Maximum | Count of FastPath routes configured on the port | None | Yes |
-
-### ExpressRoute Traffic Collector
-
-| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
-| | | | | | | |
-| CPU utilization | Performance | Count | Average | CPU Utilization of the ExpressRoute Traffic Collector | roleInstance | Yes |
-| Memory Utilization | Performance | CountPerSecond | Average | Memory Utilization of the ExpressRoute Traffic Collector | roleInstance | Yes |
-| Count of flow records processed | Availability | Count | Maximum | Count of number of flow records processed or ingested | roleInstance, ExpressRoute Circuit | Yes |
-
-## Circuits metrics
-
-### <a name = "circuitbandwidth"></a>Bits In and Out - Metrics across all peerings
-
-Aggregation type: *Avg*
-
-You can view metrics across all peerings on a given ExpressRoute circuit.
--
-### Bits In and Out - Metrics per peering
-
-Aggregation type: *Avg*
-
-You can view metrics for private, public, and Microsoft peering in bits/second.
--
-### <a name = "bgp"></a>BGP Availability - Split by Peer
-
-Aggregation type: *Avg*
-
-You can view near to real-time availability of BGP (Layer-3 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Primary BGP session status is up for private peering and the Second BGP session status is down for private peering.
--
->[!NOTE]
->During maintenance between the Microsoft edge and core network, BGP availability will appear down even if the BGP session between the customer edge and Microsoft edge remains up. For information about maintenance between the Microsoft edge and core network, make sure to have your [maintenance alerts turned on and configured](./maintenance-alerts.md).
->
-
-### FastPath routes count (at circuit level)
-
-Aggregation type: *Max*
-
-This metric shows the number of FastPath routes configured on a circuit. Set an alert for when the number of FastPath routes on a circuit goes beyond the threshold limit. For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits).
--
-### <a name = "arp"></a>ARP Availability - Split by Peering
-
-Aggregation type: *Avg*
-
-You can view near to real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-2 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Private Peering ARP session status is up across both peers, but down for Microsoft peering for both peers. The default aggregation (Average) was utilized across both peers.
--
-## ExpressRoute Direct Metrics
-
-### <a name = "admin"></a>Admin State - Split by link
-
-Aggregation type: *Avg*
-
-You can view the Admin state for each link of the ExpressRoute Direct port pair. The Admin state represents if the physical port is on or off. This state is required to pass traffic across the ExpressRoute Direct connection.
--
-### <a name = "directin"></a>Bits In Per Second - Split by link
-
-Aggregation type: *Avg*
-
-You can view the bits in per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare inbound bandwidth for both links.
--
-### <a name = "directout"></a>Bits Out Per Second - Split by link
-
-Aggregation type: *Avg*
-
-You can also view the bits out per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare outbound bandwidth for both links.
--
-### <a name = "line"></a>Line Protocol - Split by link
-
-Aggregation type: *Avg*
-
-You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates if the physical link is up and running over ExpressRoute Direct. Monitor this dashboard and set alerts to know when the physical connection goes down.
--
-### <a name = "rxlight"></a>Rx Light Level - Split by link
-
-Aggregation type: *Avg*
-
-You can view the Rx light level (the light level that the ExpressRoute Direct port is **receiving**) for each port. Healthy Rx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Rx light level falls outside of the healthy range.
--
->[!NOTE]
-> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Rx light levels by lane. However, this is not supported on all deployments.
->
-
-### <a name = "txlight"></a>Tx Light Level - Split by link
-
-Aggregation type: *Avg*
-
-You can view the Tx light level (the light level that the ExpressRoute Direct port is **transmitting**) for each port. Healthy Tx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Tx light level falls outside of the healthy range.
--
->[!NOTE]
-> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Tx light levels by lane. However, this is not supported on all deployments.
->
-
-### FastPath routes count (at port level)
-
-Aggregation type: *Max*
-
-This metric shows the number of FastPath routes configured on an ExpressRoute Direct port.
-
-*Guidance:* Set an alert for when the number of FastPath routes on the port goes beyond the threshold limit. For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits).
--
-## ExpressRoute Virtual Network Gateway Metrics
-
-Aggregation type: *Avg*
-
-When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are six gateway metrics available to you to better understand the performance of your gateway:
-
-* Bits received per second
-* CPU Utilization
-* Packets per seconds
-* Count of routes advertised to peers
-* Count of routes learned from peers
-* Frequency of routes changed
-* Number of VMs in the virtual network
-* Active flows
-* Max flows created per second
-
-We highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
-
-### <a name = "gwbits"></a>Bits received per second - Split by instance
-
-Aggregation type: *Avg*
-
-This metric captures inbound bandwidth utilization on the ExpressRoute virtual network gateway instances. Set an alert for how frequent the bandwidth utilization exceeds a certain threshold. If you need more bandwidth, increase the size of the ExpressRoute virtual network gateway.
--
-### <a name = "cpu"></a>CPU Utilization - Split by instance
-
-Aggregation type: *Avg*
-
-You can view the CPU utilization of each gateway instance. The CPU utilization might spike briefly during routine host maintenance but prolong high CPU utilization could indicate your gateway is reaching a performance bottleneck. Increasing the size of the ExpressRoute gateway might resolve this issue. Set an alert for how frequent the CPU utilization exceeds a certain threshold.
--
-### <a name = "packets"></a>Packets Per Second - Split by instance
-
-Aggregation type: *Avg*
-
-This metric captures the number of inbound packets traversing the ExpressRoute gateway. You should expect to see a consistent stream of data here if your gateway is receiving traffic from your on-premises network. Set an alert for when the number of packets per second drops below a threshold indicating that your gateway is no longer receiving traffic.
--
-### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by instance
-
-Aggregation type: *Max*
-
-This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces might include virtual networks that are connected using virtual network peering and uses remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drop below the threshold for the number of virtual network address spaces you're aware of.
--
-### <a name = "learnedroutes"></a>Count of routes learned from peer - Split by instance
-
-Aggregation type: *Max*
-
-This metric shows the number of routes the ExpressRoute gateway is learning from peers connected to the ExpressRoute circuit. These routes can be either from another virtual network connected to the same circuit or learned from on-premises. Set an alert for when the number of learned routes drop below a certain threshold. This metric can indicate either the gateway is seeing a performance problem or remote peers are no longer advertising routes to the ExpressRoute circuit.
--
-### <a name = "frequency"></a>Frequency of routes change - Split by instance
-
-Aggregation type: *Sum*
-
-This metric shows the frequency of routes being learned from or advertised to remote peers. You should first investigate your on-premises devices to understand why the network is changing so frequently. A high frequency in routes change could indicate a performance problem on the ExpressRoute gateway where scaling the gateway SKU up might resolve the problem. Set an alert for a frequency threshold to be aware of when your ExpressRoute gateway is seeing abnormal route changes.
--
-### <a name = "vm"></a>Number of VMs in the virtual network
-
-Aggregation type: *Max*
-
-This metric shows the number of virtual machines that are using the ExpressRoute gateway. The number of virtual machines might include VMs from peered virtual networks that use the same ExpressRoute gateway. Set an alert for this metric if the number of VMs goes above a certain threshold that could affect the gateway performance.
--
->[!NOTE]
-> To maintain reliability of the service, Microsoft often performs platform or OS maintenance on the gateway service. During this time, this metric may fluctuate and report inaccurately.
->
-
-## <a name = "activeflows"></a>Active flows
-
-Aggregation type: *Avg*
-
-Split by: Gateway Instance
--
-This metric displays a count of the total number of active flows on the ExpressRoute Gateway. Only inbound traffic from on-premises is captured for active flows. Through split at instance level, you can see active flow count per gateway instance. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
--
-## <a name = "maxflows"></a>Max flows created per second
-
-Aggregation type: *Max*
-
-Split by: Gateway Instance and Direction (Inbound/Outbound)
-
-This metric displays the maximum number of flows created per second on the ExpressRoute Gateway. Through split at instance level and direction, you can see max flow creation rate per gateway instance and inbound/outbound direction respectively. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
--
-## <a name = "connectionbandwidth"></a>ExpressRoute gateway connections in bits/seconds
-
-Aggregation type: *Avg*
-
-This metric shows the bits per second for ingress and egress to Azure through the ExpressRoute gateway. You can split this metric further to see specific connections to the ExpressRoute circuit.
--
-## ExpressRoute Traffic Collector metrics
-
-### CPU Utilization - Split by instance
-
-Aggregation type: *Avg* (of percentage of total utilized CPU)
-
-*Granularity: 5 min*
-
-You can view the CPU utilization of each ExpressRoute Traffic Collector instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck.
-
-**Guidance:** Set an alert for when avg CPU utilization exceeds a certain threshold.
--
-### Memory Utilization - Split by instance
-
-Aggregation type: *Avg* (of percentage of total utilized Memory)
-
-*Granularity: 5 min*
-
-You can view the memory utilization of each ExpressRoute Traffic Collector instance. Memory utilization might spike briefly during routine host maintenance, but prolonged high memory utilization could indicate your Azure Traffic Collector is reaching a performance bottleneck.
-
-**Guidance:** Set an alert for when avg memory utilization exceeds a certain threshold.
--
-### Count of flow records processed - Split by instances or ExpressRoute circuit
-
-Aggregation type: *Count*
-
-*Granularity: 5 min*
-
-You can view the count of number of flow records processed by ExpressRoute Traffic Collector, aggregated across ExpressRoute Circuits. Customer can split the metrics across each ExpressRoute Traffic Collector instance or ExpressRoute circuit when multiple circuits are associated to the ExpressRoute Traffic Collector. Monitoring this metric helps you understand if you need to deploy more ExpressRoute Traffic Collector instances or migrate ExpressRoute circuit association from one ExpressRoute Traffic Collector deployment to another.
-
-**Guidance:** Splitting by circuits is recommended when multiple ExpressRoute circuits are associated with an ExpressRoute Traffic Collector deployment. This metric helps determine the flow count of each ExpressRoute circuit and ExpressRoute Traffic Collector utilization by each ExpressRoute circuit.
--
-## Alerts for ExpressRoute gateway connections
-
-1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**.
-
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/monitor-overview.png" alt-text="Screenshot of the alerts option from the monitor overview page.":::
-
-1. Select **+ Create > Alert rule** and select the ExpressRoute gateway connection resource. Select **Next: Condition >** to configure the signal.
-
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/select-expressroute-gateway.png" alt-text="Screenshot of the selecting ExpressRoute virtual network gateway from the select a resource page.":::
-
-1. On the *Select a signal* page, select a metric, resource health, or activity log that you want to be alerted. Depending on the signal you select, you might need to enter additional information such as a threshold value. You can also combine multiple signals into a single alert. Select **Next: Actions >** to define who and how they get notify.
-
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/signal.png" alt-text="Screenshot of list of signals that can be alerted for ExpressRoute gateways.":::
-
-1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who receives them.
-
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/action-group.png" alt-text="Screenshot of add action groups page.":::
-
-1. Select **Review + create** and then **Create** to deploy the alert into your subscription.
-
-### Alerts based on each peering
-
-After you select a metric, certain metric allow you to set up dimensions based on peering or a specific peer (virtual networks).
--
-### Configure alerts for activity logs on circuits
-
-When selecting signals to be alerted on, you can select **Activity Log** signal type.
--
-## More metrics in Log Analytics
-
-You can also view ExpressRoute metrics by going to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output contains the following columns.
-
-| **Column** | **Type** | **Description** |
-| | | |
-| TimeGrain | string | PT1M (metric values are pushed every minute) |
-| Count | real | Usually is 2 (each MSEE pushes a single metric value every minute) |
-| Minimum | real | The minimum of the two metric values pushed by the two MSEEs |
-| Maximum | real | The maximum of the two metric values pushed by the two MSEEs |
-| Average | real | Equal to (Minimum + Maximum)/2 |
-| Total | real | Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried) |
-
-## Next steps
-
-Set up your ExpressRoute connection.
-
-* [Create and modify a circuit](expressroute-howto-circuit-arm.md)
-* [Create and modify peering configuration](expressroute-howto-routing-arm.md)
-* [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
expressroute Monitor Expressroute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute-reference.md
Title: Monitoring ExpressRoute data reference
-description: Important reference material needed when you monitor ExpressRoute
-
+ Title: Monitoring data reference for Azure ExpressRoute
+description: This article contains important reference material you need when you monitor Azure ExpressRoute by using Azure Monitor.
+Last updated : 07/11/2024
-Previously updated : 06/22/2021
+# Azure ExpressRoute monitoring data reference
-# Monitoring ExpressRoute data reference
-This article provides a reference of log and metric data collected to analyze the performance and availability of ExpressRoute.
-See [Monitoring ExpressRoute](monitor-expressroute.md) for details on collecting and analyzing monitoring data for ExpressRoute.
+See [Monitor Azure ExpressRoute](monitor-expressroute.md) for details on the data you can collect for ExpressRoute and how to use it.
-## Metrics
-This section lists all the automatically collected platform metrics for ExpressRoute. For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+>[!NOTE]
+> The *GlobalReachBitsInPerSecond* and *GlobalReachBitsOutPerSecond* metrics are only visible if at least one Global Reach connection is established.
+>
-| Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| ExpressRoute circuit | [Microsoft.Network/expressRouteCircuits](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) |
-| ExpressRoute circuit peering | [Microsoft.Network/expressRouteCircuits/peerings](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutecircuitspeerings) |
-| ExpressRoute Gateways | [Microsoft.Network/expressRouteGateways](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) |
-| ExpressRoute Direct | [Microsoft.Network/expressRoutePorts](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressrouteports) |
+### Supported metrics for Microsoft.Network/expressRouteCircuits
->[!NOTE]
+The following table lists the metrics available for the Microsoft.Network/expressRouteCircuits resource type.
++
+### Supported metrics for Microsoft.Network/expressRouteCircuits/peerings
+
+The following table lists the metrics available for the Microsoft.Network/expressRouteCircuits/peerings resource type.
++
+### Supported metrics for microsoft.network/expressroutegateways
+
+The following table lists the metrics available for the microsoft.network/expressroutegateways resource type.
++
+### Supported metrics for Microsoft.Network/expressRoutePorts
+
+The following table lists the metrics available for the Microsoft.Network/expressRoutePorts resource type.
++
+### Metrics information
+
+Follow links in these lists for more information about metrics from the preceding tables.
+
+ExpressRoute circuits metrics:
+
+- [ARP Availability](#arp)
+- [BGP Availability](#bgp)
+- [BitsInPerSecond](#circuitbandwidth)
+- [BitsOutPerSecond](#circuitbandwidth)
+- DroppedInBitsPerSecond
+- DroppedOutBitsPerSecond
+- GlobalReachBitsInPerSecond
+- GlobalReachBitsOutPerSecond
+- [FastPathRoutesCount](#fastpath-routes-count-at-circuit-level)
+
+> [!NOTE]
> The *GlobalReachBitsInPerSecond* and *GlobalReachBitsOutPerSecond* metrics are visible only if at least one Global Reach connection is established.
+ExpressRoute gateways metrics:
+
+- [Bits received per second](#gwbits)
+- [CPU utilization](#cpu)
+- [Packets per second](#packets)
+- [Count of routes advertised to peer](#advertisedroutes)
+- [Count of routes learned from peer](#learnedroutes)
+- [Frequency of routes changed](#frequency)
+- [Number of VMs in virtual network](#vm)
+- [Active flows](#activeflows)
+- [Max flows created per second](#maxflows)
+
+ExpressRoute gateway connections metrics:
+
+- [BitsInPerSecond](#connectionbandwidth)
+- [BitsOutPerSecond](#connectionbandwidth)
+
+ExpressRoute Direct metrics:
+
+- [BitsInPerSecond](#directin)
+- [BitsOutPerSecond](#directout)
+- DroppedInBitsPerSecond
+- DroppedOutBitsPerSecond
+- [AdminState](#admin)
+- [LineProtocol](#line)
+- [RxLightLevel](#rxlight)
+- [TxLightLevel](#txlight)
+- [FastPathRoutesCount](#fastpath-routes-count-at-port-level)
+
+ExpressRoute Traffic Collector metrics:
+
+- [CPU utilization](#cpu-utilizationsplit-by-instance-1)
+- [Memory Utilization](#memory-utilizationsplit-by-instance)
+- [Count of flow records processed](#count-of-flow-records-processedsplit-by-instances-or-expressroute-circuit)
+
+### Circuits metrics
+
+#### <a name = "arp"></a>ARP Availability - Split by Peering
+
+Aggregation type: *Avg*
+
+You can view near real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-2 connectivity) across peerings and peers (primary and secondary ExpressRoute routers). This dashboard shows that the private peering ARP session status is up across both peers but down for Microsoft peering on both peers. The default aggregation (Average) is applied across both peers.
++
+#### <a name = "bgp"></a>BGP Availability - Split by Peer
+
+Aggregation type: *Avg*
+
+You can view near real-time availability of BGP (Layer-3 connectivity) across peerings and peers (primary and secondary ExpressRoute routers). This dashboard shows that the primary BGP session status is up for private peering and the secondary BGP session status is down for private peering.
++
+>[!NOTE]
+>During maintenance between the Microsoft edge and core network, BGP availability will appear down even if the BGP session between the customer edge and Microsoft edge remains up. For information about maintenance between the Microsoft edge and core network, make sure to have your [maintenance alerts turned on and configured](./maintenance-alerts.md).
+>
+
+#### <a name = "circuitbandwidth"></a>Bits In and Out - Metrics across all peerings
+
+Aggregation type: *Avg*
+
+You can view metrics across all peerings on a given ExpressRoute circuit.
++
+#### Bits In and Out - Metrics per peering
+
+Aggregation type: *Avg*
+
+You can view metrics for private, public, and Microsoft peering in bits/second.
++
+#### FastPath routes count (at circuit level)
+
+Aggregation type: *Max*
+
+This metric shows the number of FastPath routes configured on a circuit. Set an alert for when the number of FastPath routes on a circuit goes beyond the threshold limit. For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits).
++
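If you automate alerting, a minimal Azure PowerShell sketch like the following can create such an alert rule with the Az.Monitor module. The resource IDs, rule name, and the threshold value are placeholders to replace with your own values; the threshold should reflect the FastPath route limit that applies to your circuit.

```powershell
# Minimal sketch, assuming the Az.Monitor module and placeholder resource IDs.
$circuitId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/expressRouteCircuits/<circuit-name>"

# Fire when the maximum FastPath route count in the window exceeds a placeholder threshold.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "FastPathRoutesCount" `
    -TimeAggregation Maximum -Operator GreaterThan -Threshold 900

Add-AzMetricAlertRuleV2 -Name "fastpath-routes-near-limit" -ResourceGroupName "<rg>" `
    -TargetResourceId $circuitId -Condition $criteria `
    -WindowSize 0:15 -Frequency 0:05 -Severity 2 `
    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<action-group>"
```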
+### Virtual network gateway metrics
+
+Aggregation type: *Avg*
+
+When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. The following gateway metrics are available to help you better understand the performance of your gateway:
+
+- Bits received per second
+- CPU Utilization
+- Packets per second
+- Count of routes advertised to peers
+- Count of routes learned from peers
+- Frequency of routes changed
+- Number of VMs in the virtual network
+- Active flows
+- Max flows created per second
+
+We highly recommend that you set alerts for each of these metrics so that you're aware when your gateway might be experiencing performance issues.
+
+#### <a name = "gwbits"></a>Bits received per second - Split by instance
+
+Aggregation type: *Avg*
+
+This metric captures inbound bandwidth utilization on the ExpressRoute virtual network gateway instances. Set an alert for how frequently the bandwidth utilization exceeds a certain threshold. If you need more bandwidth, increase the size of the ExpressRoute virtual network gateway.
++
+#### <a name = "cpu"></a>CPU Utilization - Split by instance
+
+Aggregation type: *Avg*
+
+You can view the CPU utilization of each gateway instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate that your gateway is reaching a performance bottleneck. Increasing the size of the ExpressRoute gateway might resolve this issue. Set an alert for how frequently the CPU utilization exceeds a certain threshold.
++
+#### <a name = "packets"></a>Packets Per Second - Split by instance
+
+Aggregation type: *Avg*
+
+This metric captures the number of inbound packets traversing the ExpressRoute gateway. You should expect to see a consistent stream of data here if your gateway is receiving traffic from your on-premises network. Set an alert for when the number of packets per second drops below a threshold indicating that your gateway is no longer receiving traffic.
++
+#### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by instance
+
+Aggregation type: *Max*
+
+This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces might include virtual networks that are connected by using virtual network peering and use the remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drops below the threshold for the number of virtual network address spaces you're aware of.
++
+#### <a name = "learnedroutes"></a>Count of routes learned from peer - Split by instance
+
+Aggregation type: *Max*
+
+This metric shows the number of routes the ExpressRoute gateway is learning from peers connected to the ExpressRoute circuit. These routes can be either from another virtual network connected to the same circuit or learned from on-premises. Set an alert for when the number of learned routes drops below a certain threshold. This metric can indicate either that the gateway is experiencing a performance problem or that remote peers are no longer advertising routes to the ExpressRoute circuit.
++
+#### <a name = "frequency"></a>Frequency of routes change - Split by instance
+
+Aggregation type: *Sum*
+
+This metric shows the frequency of routes being learned from or advertised to remote peers. You should first investigate your on-premises devices to understand why the network is changing so frequently. A high frequency of route changes could indicate a performance problem on the ExpressRoute gateway, where scaling up the gateway SKU might resolve the problem. Set an alert for a frequency threshold to be aware of when your ExpressRoute gateway is seeing abnormal route changes.
++
+#### <a name = "vm"></a>Number of VMs in the virtual network
+
+Aggregation type: *Max*
+
+This metric shows the number of virtual machines that are using the ExpressRoute gateway. The number of virtual machines might include VMs from peered virtual networks that use the same ExpressRoute gateway. Set an alert for this metric if the number of VMs goes above a certain threshold that could affect the gateway performance.
++
+>[!NOTE]
+> To maintain reliability of the service, Microsoft often performs platform or OS maintenance on the gateway service. During this time, this metric may fluctuate and report inaccurately.
+>
+
+#### <a name = "activeflows"></a>Active flows
+
+Aggregation type: *Avg*
+
+Split by: Gateway Instance
+
+This metric displays a count of the total number of active flows on the ExpressRoute gateway. Only inbound traffic from on-premises is captured for active flows. By splitting at the instance level, you can see the active flow count per gateway instance. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
++
+#### <a name = "maxflows"></a>Max flows created per second
+
+Aggregation type: *Max*
+
+Split by: Gateway Instance and Direction (Inbound/Outbound)
+
+This metric displays the maximum number of flows created per second on the ExpressRoute gateway. By splitting at the instance level and by direction, you can see the maximum flow creation rate per gateway instance and per inbound or outbound direction. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
++
+### <a name = "connectionbandwidth"></a>Gateway connections in bits/seconds
+
+Aggregation type: *Avg*
+
+This metric shows the bits per second for ingress and egress to Azure through the ExpressRoute gateway. You can split this metric further to see specific connections to the ExpressRoute circuit.
++
+### ExpressRoute Direct metrics
+
+#### <a name = "directin"></a>Bits In Per Second - Split by link
+
+Aggregation type: *Avg*
+
+You can view the bits in per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare inbound bandwidth for both links.
++
+#### <a name = "directout"></a>Bits Out Per Second - Split by link
+
+Aggregation type: *Avg*
+
+You can also view the bits out per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare outbound bandwidth for both links.
++
+#### <a name = "admin"></a>Admin State - Split by link
+
+Aggregation type: *Avg*
+
+You can view the Admin state for each link of the ExpressRoute Direct port pair. The Admin state indicates whether the physical port is on or off. This state is required to pass traffic across the ExpressRoute Direct connection.
++
+#### <a name = "line"></a>Line Protocol - Split by link
+
+Aggregation type: *Avg*
+
+You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates whether the physical link is up and running over ExpressRoute Direct. Monitor this dashboard and set alerts to know when the physical connection goes down.
++
+#### <a name = "rxlight"></a>Rx Light Level - Split by link
+
+Aggregation type: *Avg*
+
+You can view the Rx light level (the light level that the ExpressRoute Direct port is **receiving**) for each port. Healthy Rx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Rx light level falls outside of the healthy range.
++
+>[!NOTE]
+> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections support a split view for Rx light levels by lane, but this view isn't supported on all deployments.
+>
+
+#### <a name = "txlight"></a>Tx Light Level - Split by link
+
+Aggregation type: *Avg*
+
+You can view the Tx light level (the light level that the ExpressRoute Direct port is **transmitting**) for each port. Healthy Tx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Tx light level falls outside of the healthy range.
++
+>[!NOTE]
+> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections support a split view for Tx light levels by lane, but this view isn't supported on all deployments.
>
-## Metric dimensions
+#### FastPath routes count (at port level)
+
+Aggregation type: *Max*
+
+This metric shows the number of FastPath routes configured on an ExpressRoute Direct port.
+
+*Guidance:* Set an alert for when the number of FastPath routes on the port goes beyond the threshold limit. For more information, see [ExpressRoute FastPath limits](about-fastpath.md#ip-address-limits).
++
+### ExpressRoute Traffic Collector metrics
-For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
+#### CPU Utilization - Split by instance
-ExpressRoute has the following dimensions associated with its metrics.
+Aggregation type: *Avg* (of percentage of total utilized CPU)
-### Dimension for ExpressRoute circuit
+*Granularity: 5 min*
+
+You can view the CPU utilization of each ExpressRoute Traffic Collector instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck.
+
+**Guidance:** Set an alert for when avg CPU utilization exceeds a certain threshold.
++
+#### Memory Utilization - Split by instance
+
+Aggregation type: *Avg* (of percentage of total utilized Memory)
+
+*Granularity: 5 min*
+
+You can view the memory utilization of each ExpressRoute Traffic Collector instance. Memory utilization might spike briefly during routine host maintenance, but prolonged high memory utilization could indicate your Azure Traffic Collector is reaching a performance bottleneck.
+
+**Guidance:** Set an alert for when avg memory utilization exceeds a certain threshold.
++
+#### Count of flow records processed - Split by instances or ExpressRoute circuit
+
+Aggregation type: *Count*
+
+*Granularity: 5 min*
+
+You can view the count of flow records processed by ExpressRoute Traffic Collector, aggregated across ExpressRoute circuits. You can split the metric across each ExpressRoute Traffic Collector instance or ExpressRoute circuit when multiple circuits are associated with the ExpressRoute Traffic Collector. Monitoring this metric helps you understand whether you need to deploy more ExpressRoute Traffic Collector instances or migrate the ExpressRoute circuit association from one ExpressRoute Traffic Collector deployment to another.
+
+**Guidance:** Splitting by circuits is recommended when multiple ExpressRoute circuits are associated with an ExpressRoute Traffic Collector deployment. This metric helps determine the flow count of each ExpressRoute circuit and ExpressRoute Traffic Collector utilization by each ExpressRoute circuit.
++++
+Dimension for ExpressRoute circuit:
| Dimension Name | Description |
-| - | -- |
-| **PeeringType** | The type of peering configured. The supported values are Microsoft and Private peering. |
-| **Peering** | The supported values are Primary and Secondary. |
-| **PeeredCircuitSkey** | The remote ExpressRoute circuit service key connected using Global Reach. |
+|:|:|
+| PeeringType | The type of peering configured. The supported values are Microsoft and Private peering. |
+| Peering | The supported values are Primary and Secondary. |
+| DeviceRole | |
+| PeeredCircuitSkey | The remote ExpressRoute circuit service key connected using Global Reach. |
-### Dimension for ExpressRoute gateway
+Dimension for ExpressRoute gateway:
| Dimension Name | Description |
-| - | -- |
-| **roleInstance** | The gateway instance. Each ExpressRoute gateway is comprised of multiple instances, and the supported values are GatewayTenantWork_IN_X (where X is a minimum of 0 and a maximum of the number of gateway instances -1). |
+|:-- |:-- |
+| BgpPeerAddress | |
+| ConnectionName | |
+| direction | |
+| roleInstance | The gateway instance. Each ExpressRoute gateway is composed of multiple instances. The supported values are `GatewayTenantWork_IN_X`, where X is a minimum of 0 and a maximum of the number of gateway instances -1. |
-### Dimension for Express Direct
+Dimension for Express Direct:
| Dimension Name | Description |
-| - | -- |
-| **Link** | The physical link. Each ExpressRoute Direct port pair is comprised of two physical links for redundancy, and the supported values are link1 and link2. |
+|:|:|
+| Lane | |
+| Link | The physical link. Each ExpressRoute Direct port pair is composed of two physical links for redundancy, and the supported values are link1 and link2. |
+
-## Resource logs
+### Supported resource logs for Microsoft.Network/expressRouteCircuits
-This section lists the types of resource logs you can collect for ExpressRoute.
-|Resource Log Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| ExpressRoute Circuit | [Microsoft.Network/expressRouteCircuits](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) |
-For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
+### ExpressRoute Microsoft.Network/expressRouteCircuits
-## Azure Monitor Logs tables
+- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics#columns)
-Azure ExpressRoute uses Kusto tables from Azure Monitor Logs. You can query these tables with Log analytics. For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
-## Activity log
+- [Microsoft.Network resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftnetwork)
-The following table lists the operations related to ExpressRoute that may be created in the Activity log.
+The following table lists the operations related to ExpressRoute that might be created in the Activity log.
| Operation | Description |
|:|:|
-| All Administrative operations | All administrative operations including create, update and delete of an ExpressRoute circuit. |
+| All Administrative operations | All administrative operations including create, update, and delete of an ExpressRoute circuit. |
| Create or update ExpressRoute circuit | An ExpressRoute circuit was created or updated. |
| Deletes ExpressRoute circuit | An ExpressRoute circuit was deleted. |
-For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
## Schemas

For a detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../azure-monitor/essentials/resource-logs-schema.md).
-When reviewing any metrics through Log Analytics, the output will contain the following columns:
+When you review any metrics through Log Analytics, the output contains the following columns:
-|**Column**|**Type**|**Description**|
-| | | |
-|TimeGrain|string|PT1M (metric values are pushed every minute)|
-|Count|real|Usually equal to 2 (each MSEE pushes a single metric value every minute)|
-|Minimum|real|The minimum of the two metric values pushed by the two MSEEs|
-|Maximum|real|The maximum of the two metric values pushed by the two MSEEs|
-|Average|real|Equal to (Minimum + Maximum)/2|
-|Total|real|Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried)|
+| Column | Type | Description |
+|:-|:--|:|
+| TimeGrain | string | PT1M (metric values are pushed every minute) |
+| Count | real | Usually equal to 2 (each MSEE pushes a single metric value every minute) |
+| Minimum | real | The minimum of the two metric values pushed by the two MSEEs |
+| Maximum | real | The maximum of the two metric values pushed by the two MSEEs |
+| Average | real | Equal to (Minimum + Maximum)/2 |
+| Total | real | Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried) |
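For example, if the primary MSEE pushes a metric value of 120 and the secondary MSEE pushes 80 during the same minute, the row shows Minimum = 80, Maximum = 120, Average = 100, and Total = 200; Total reflects the combined value across both devices.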
-## See also
+## Related content
-- See [Monitoring Azure ExpressRoute](monitor-expressroute.md) for a description of monitoring Azure ExpressRoute.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitor Azure ExpressRoute](monitor-expressroute.md) for a description of monitoring ExpressRoute.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
expressroute Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute.md
Title: Monitoring Azure ExpressRoute
-description: Start here to learn how to monitor Azure ExpressRoute.
-
+ Title: Monitor Azure ExpressRoute
+description: Start here to learn how to monitor Azure ExpressRoute by using Azure Monitor. This article includes links to other resources.
+ Last updated : 07/11/2024
- Previously updated : 03/31/2024
-# Monitoring Azure ExpressRoute
+# Monitor Azure ExpressRoute
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure ExpressRoute. Azure ExpressRoute uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
-
-## ExpressRoute insights
-
-Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called *insights*.
ExpressRoute uses Network insights to provide a detailed topology mapping of all ExpressRoute components (peerings, connections, gateways) in relation to one another. Network insights for ExpressRoute also has a preloaded metrics dashboard for availability, throughput, packet drops, and gateway metrics. For more information, see [Azure ExpressRoute Insights using Networking Insights](expressroute-network-insights.md).
-## Monitoring data
-
-Azure ExpressRoute collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
-
-See [Monitoring Azure ExpressRoute data reference](monitor-expressroute-reference.md) for detailed information on the metrics and logs metrics created by Azure ExpressRoute.
-
-## Collection and routing
+For more information about the resource types for ExpressRoute, see [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md).
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
See [Create diagnostic setting to collect platform logs and metrics in Azure](..
> [!IMPORTANT]
> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
-The metrics and logs you can collect are discussed in the following sections.
+
+For a list of available metrics for ExpressRoute, see [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md#metrics).
+
+> [!NOTE]
+> Using **Classic Metrics** is not recommended.
+>
## Analyzing metrics
-You can analyze metrics for *Azure ExpressRoute* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
+You can analyze metrics for *Azure ExpressRoute* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/metrics-page.png" alt-text="Screenshot of the metrics dashboard for ExpressRoute.":::

For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-* To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*.
-* To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled.
-* To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
+- To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*.
+- To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled.
+- To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
Once a metric is selected, the default aggregation is applied. Optionally, you can apply splitting, which shows the metric with different dimensions.
+### ExpressRoute metrics
+
+To view **Metrics**, go to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
+
+After a metric is selected, the default aggregation is applied. Optionally, you can apply splitting, which shows the metric with different dimensions.
+
+> [!IMPORTANT]
+> When viewing ExpressRoute metrics in the Azure portal, select a time granularity of **5 minutes or greater** for best possible results.
+>
+> :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/metric-granularity.png" alt-text="Screenshot of time granularity options.":::
+
+For the ExpressRoute metrics, see [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md).
+
+### Aggregation Types
+
+Metrics explorer supports sum, maximum, minimum, average, and count as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). Use the recommended aggregation type when you review the insights for each ExpressRoute metric.
+
+- Sum: The sum of all values captured during the aggregation interval.
+- Count: The number of measurements captured during the aggregation interval.
+- Average: The average of the metric values captured during the aggregation interval.
+- Min: The smallest value captured during the aggregation interval.
+- Max: The largest value captured during the aggregation interval.
++
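As an illustration, the same aggregations can be requested outside the portal. The following is a minimal sketch that uses the Az.Monitor `Get-AzMetric` cmdlet; the circuit resource ID is a placeholder.

```powershell
# Minimal sketch: retrieve average BitsInPerSecond for a circuit at 5-minute granularity.
$circuitId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/expressRouteCircuits/<circuit-name>"

Get-AzMetric -ResourceId $circuitId -MetricName "BitsInPerSecond" `
    -AggregationType Average -TimeGrain 0:05 `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date)
```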
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for ExpressRoute, see [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md#resource-logs).
++
+### More metrics in Log Analytics
+
+You can also view ExpressRoute metrics by going to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output contains the following columns.
+
+| Column | Type | Description |
+|:-|:--|:|
+| TimeGrain | string | PT1M (metric values are pushed every minute) |
+| Count | real | Usually equal to 2 (each MSEE pushes a single metric value every minute) |
+| Minimum | real | The minimum of the two metric values pushed by the two MSEEs |
+| Maximum | real | The maximum of the two metric values pushed by the two MSEEs |
+| Average | real | Equal to (Minimum + Maximum)/2 |
+| Total | real | Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried) |
+
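If you'd rather run such a query from a script than from the portal, the following minimal sketch uses the Az.OperationalInsights module; the workspace ID is a placeholder, and it assumes the circuit's metrics are routed to that Log Analytics workspace.

```powershell
# Minimal sketch: query AzureMetrics and project the columns described in the preceding table.
$workspaceId = "<log-analytics-workspace-guid>"
$query = @"
AzureMetrics
| where MetricName == "BitsInPerSecond"
| project TimeGenerated, TimeGrain, Count, Minimum, Maximum, Average, Total
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```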
+<a name="collection-and-routing"></a>
## Analyzing logs

Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). The schema for ExpressRoute resource logs is found in the [Azure ExpressRoute Data Reference](monitor-expressroute-reference.md#schemas).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). The schema for ExpressRoute resource logs is found in the [Azure ExpressRoute Data Reference](monitor-expressroute-reference.md#schemas).
The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries by using Log Analytics.

ExpressRoute stores data in the following tables.

| Table | Description |
-| -- | -- |
+|:|:|
| AzureDiagnostics | Common table used by multiple services to store Resource logs. Resource logs from ExpressRoute can be identified with `MICROSOFT.NETWORK`. |
| AzureMetrics | Metric data emitted by ExpressRoute that measures health and performance. |
To view these tables, navigate to your ExpressRoute circuit resource and select the **Logs** tab.

> [!NOTE]
> Azure diagnostic logs, such as the BGP route table log, are updated every 24 hours.

### Sample Kusto queries
-Here are some queries that you can enter into the Log search bar to help you monitor your Azure ExpressRoute resources. These queries work with the [new language](../azure-monitor/logs/log-query-overview.md).
-
-* To query for Border Gateway Protocol (BGP) route table learned over the last 12 hours.
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated > ago(12h)
- | where ResourceType == "EXPRESSROUTECIRCUITS"
- | project TimeGenerated, ResourceType , network_s, path_s, OperationName
- ```
-
-* To query for BGP informational messages by level, resource type, and network.
-
- ```Kusto
- AzureDiagnostics
- | where Level == "Informational"
- | where ResourceType == "EXPRESSROUTECIRCUITS"
- | project TimeGenerated, ResourceId , Level, ResourceType , network_s, path_s
- ```
-
-* To query for Traffic graph BitInPerSeconds in the last one hour.
-
- ```Kusto
- AzureMetrics
- | where MetricName == "BitsInPerSecond"
- | summarize by Average, bin(TimeGenerated, 1h), Resource
- | render timechart
- ```
-
-* To query for Traffic graph BitOutPerSeconds in the last one hour.
-
- ```Kusto
- AzureMetrics
- | where MetricName == "BitsOutPerSecond"
- | summarize by Average, bin(TimeGenerated, 1h), Resource
- | render timechart
- ```
-
-* To query for graph of ArpAvailability in 5-minute intervals.
-
- ```Kusto
- AzureMetrics
- | where MetricName == "ArpAvailability"
- | summarize by Average, bin(TimeGenerated, 5m), Resource
- | render timechart
- ```
-
-* To query for graph of BGP availability in 5-minute intervals.
-
- ```Kusto
- AzureMetrics
- | where MetricName == "BGPAvailability"
- | summarize by Average, bin(TimeGenerated, 5m), Resource
- | render timechart
- ```
-
-## Alerts
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-
-The following table lists common and recommended alert rules for ExpressRoute.
+These queries work with the [new language](../azure-monitor/logs/log-query-overview.md).
+
+- Query for Border Gateway Protocol (BGP) route table learned over the last 12 hours.
+
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(12h)
+ | where ResourceType == "EXPRESSROUTECIRCUITS"
+ | project TimeGenerated, ResourceType , network_s, path_s, OperationName
+ ```
+
+- Query for BGP informational messages by level, resource type, and network.
+
+ ```kusto
+ AzureDiagnostics
+ | where Level == "Informational"
+ | where ResourceType == "EXPRESSROUTECIRCUITS"
+ | project TimeGenerated, ResourceId , Level, ResourceType , network_s, path_s
+ ```
+
+- Query for Traffic graph BitInPerSeconds in the last one hour.
+
+ ```kusto
+ AzureMetrics
+ | where MetricName == "BitsInPerSecond"
+ | summarize by Average, bin(TimeGenerated, 1h), Resource
+ | render timechart
+ ```
+
+- Query for Traffic graph BitOutPerSeconds in the last one hour.
+
+ ```kusto
+ AzureMetrics
+ | where MetricName == "BitsOutPerSecond"
+ | summarize by Average, bin(TimeGenerated, 1h), Resource
+ | render timechart
+ ```
+
+- Query for graph of ArpAvailability in 5-minute intervals.
+
+ ```kusto
+ AzureMetrics
+ | where MetricName == "ArpAvailability"
+ | summarize by Average, bin(TimeGenerated, 5m), Resource
+ | render timechart
+ ```
+
+- Query for graph of BGP availability in 5-minute intervals.
+
+ ```kusto
+ AzureMetrics
+ | where MetricName == "BGPAvailability"
+ | summarize by Average, bin(TimeGenerated, 5m), Resource
+ | render timechart
+ ```
++
+> [!NOTE]
+> During maintenance between the Microsoft edge and core network, BGP availability appears down even if the BGP session between the customer edge and Microsoft edge remains up. For information about maintenance between the Microsoft edge and core network, make sure to have your [maintenance alerts turned on and configured](./maintenance-alerts.md).
++
+### ExpressRoute alert rules
+
+The following table lists some suggested alert rules for ExpressRoute. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md).
| Alert type | Condition | Description |
|:|:|:|
| ARP availability down | Dimension name: Peering Type, Aggregation type: Avg, Operator: Less than, Threshold value: 100% | When ARP availability is down for a peering type. |
| BGP availability down | Dimension name: Peer, Aggregation type: Avg, Operator: Less than, Threshold value: 100% | When BGP availability is down for a peer. |
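As an illustration, a rule like the *BGP availability down* row can also be created from Azure PowerShell. This is a minimal sketch with the Az.Monitor module; the resource IDs and rule name are placeholders, and the dimension selection mirrors the table's *Peer* dimension.

```powershell
# Minimal sketch: alert when average BgpAvailability drops below 100%, split across all peers.
$circuitId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/expressRouteCircuits/<circuit-name>"

$peer = New-AzMetricAlertRuleV2DimensionSelection -DimensionName "Peer" -ValuesToInclude "*"
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "BgpAvailability" `
    -DimensionSelection $peer -TimeAggregation Average -Operator LessThan -Threshold 100

Add-AzMetricAlertRuleV2 -Name "bgp-availability-down" -ResourceGroupName "<rg>" `
    -TargetResourceId $circuitId -Condition $criteria `
    -WindowSize 0:05 -Frequency 0:05 -Severity 1 `
    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<action-group>"
```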
->[!NOTE]
->During maintenance between the Microsoft edge and core network, BGP availability will appear down even if the BGP session between the customer edge and Microsoft edge remains up. For information about maintenance between the Microsoft edge and core network, make sure to have your [maintenance alerts turned on and configured](./maintenance-alerts.md).
->
### Alerts for ExpressRoute gateway connections

1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**.

   :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/monitor-overview.png" alt-text="Screenshot of the alerts option from the monitor overview page.":::
-1. Select **+ Create > Alert rule** and select the ExpressRoute gateway connection resource. Select **Next: Condition >** to configure the signal.
+1. Select **+ Create** > **Alert rule** and select the ExpressRoute gateway connection resource. Select **Next: Condition >** to configure the signal.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/select-expressroute-gateway.png" alt-text="Screenshot of the selecting ExpressRoute virtual network gateway from the select a resource page.":::
The following table lists common and recommended alert rules for ExpressRoute.
1. Select **Review + create** and then **Create** to deploy the alert into your subscription.
-## Next steps
+
+### Alerts based on each peering
+
+After you select a metric, certain metrics allow you to set up dimensions based on peering or a specific peer (virtual networks).
++
+### Configure alerts for activity logs on circuits
+
+When selecting signals to be alerted on, you can select the **Activity Log** signal type.
++
+## Related content
-* See [Monitoring ExpressRoute data reference](monitor-expressroute-reference.md) for a reference of the metrics, logs, and other important values created by ExpressRoute.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/overview.md) for details on monitoring Azure resources.
+- See [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md) for a reference of the metrics, logs, and other important values created for ExpressRoute.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
governance Create Management Group Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-azure-cli.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
governance Create Management Group Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-dotnet.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
governance Create Management Group Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-go.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
that can create a management group.
package main import (
- "context"
- "fmt"
- "os"
+ "context"
+ "fmt"
+ "os"
- mg "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-05-01/managementgroups"
- "github.com/Azure/go-autorest/autorest/azure/auth"
+ mg "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-05-01/managementgroups"
+ "github.com/Azure/go-autorest/autorest/azure/auth"
) func main() {
- // Get variables from command line arguments
- var mgName = os.Args[1]
-
- // Create and authorize a client
- mgClient := mg.NewClient()
- authorizer, err := auth.NewAuthorizerFromCLI()
- if err == nil {
- mgClient.Authorizer = authorizer
- } else {
- fmt.Printf(err.Error())
- }
-
- // Create the request
- Request := mg.CreateManagementGroupRequest{
- Name: &mgName,
- }
-
- // Run the query and get the results
- var results, queryErr = mgClient.CreateOrUpdate(context.Background(), mgName, Request, "no-cache")
- if queryErr == nil {
- fmt.Printf("Results: " + fmt.Sprint(results) + "\n")
- } else {
- fmt.Printf(queryErr.Error())
- }
+ // Get variables from command line arguments
+ var mgName = os.Args[1]
+
+ // Create and authorize a client
+ mgClient := mg.NewClient()
+ authorizer, err := auth.NewAuthorizerFromCLI()
+ if err == nil {
+ mgClient.Authorizer = authorizer
+ } else {
+ fmt.Printf(err.Error())
+ }
+
+ // Create the request
+ Request := mg.CreateManagementGroupRequest{
+ Name: &mgName,
+ }
+
+ // Run the query and get the results
+ var results, queryErr = mgClient.CreateOrUpdate(context.Background(), mgName, Request, "no-cache")
+ if queryErr == nil {
+ fmt.Printf("Results: " + fmt.Sprint(results) + "\n")
+ } else {
+ fmt.Printf(queryErr.Error())
+ }
} ```
governance Create Management Group Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-javascript.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
governance Create Management Group Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-portal.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
governance Create Management Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-powershell.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
governance Create Management Group Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-python.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
governance Create Management Group Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-rest-api.md
directory. You receive a notification when the process is complete. For more inf
- Any Microsoft Entra ID user in the tenant can create a management group without the management group write permission assigned to that user if
- [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization)
isn't enabled. This new management group becomes a child of the Root Management Group or the
- [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)
and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
governance Protect Resource Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/how-to/protect-resource-hierarchy.md
Title: How to protect your resource hierarchy - Azure Governance
-description: Learn how to protect your resource hierarchy with hierarchy settings that include setting the default management group.
Previously updated : 08/17/2021
+ Title: Protect your resource hierarchy - Azure Governance
+description: Learn how to help protect your resource hierarchy by using hierarchy settings that include defining the default management group.
Last updated : 07/23/2024
-# How to protect your resource hierarchy
+# Protect your resource hierarchy
-Your resources, resource groups, subscriptions, management groups, and tenant collectively make up
-your resource hierarchy. Settings at the root management group, such as Azure custom roles or Azure
-Policy policy assignments, can impact every resource in your resource hierarchy. It's important to
-protect the resource hierarchy from changes that could negatively impact all resources.
+Your resources, resource groups, subscriptions, management groups, and tenant compose
+your resource hierarchy. Settings at the root management group, such as Azure custom roles or
+policy assignments, can affect every resource in your resource hierarchy. It's important to
+protect the resource hierarchy from changes that could negatively affect all resources.
-Management groups now have hierarchy settings that enable the tenant administrator to control these
+Management groups have hierarchy settings that enable the tenant administrator to control these
behaviors. This article covers each of the available hierarchy settings and how to set them.

## Azure RBAC permissions for hierarchy settings
-Configuring any of the hierarchy settings requires the following two resource provider operations on
+Configuring hierarchy settings requires the following resource provider operations on
the root management group:

- `Microsoft.Management/managementgroups/settings/write`
- `Microsoft.Management/managementgroups/settings/read`
-These operations only allow a user to read and update the hierarchy settings. The operations don't
-provide any other access to the management group hierarchy or resources in the hierarchy. Both of
-these operations are available in the Azure built-in role **Hierarchy Settings Administrator**.
+These operations represent Azure role-based access control (Azure RBAC) permissions.
+They only allow a user to read and update the hierarchy settings. They don't
+provide any other access to the management group hierarchy or to resources in the hierarchy.
-## Setting - Default management group
+Both of
+these operations are available in the Azure built-in role Hierarchy Settings Administrator.
-By default, a new subscription added within a tenant is added as a member of the root management
-group. If policy assignments, Azure role-based access control (Azure RBAC), and other governance
-constructs are assigned to the root management group, they immediately effect these new
+## Setting: Define the default management group
+
+By default, a new subscription that you add in a tenant becomes a member of the root management
+group. If you assign policy assignments, Azure RBAC, and other governance
+constructs to the root management group, they immediately affect these new
subscriptions. For this reason, many organizations don't apply these constructs at the root
-management group even though that is the desired place to assign them. In other cases, a more
-restrictive set of controls is desired for new subscriptions, but shouldn't be assigned to all
+management group, even though that's the desired place to assign them. In other cases, an organization wants a more
+restrictive set of controls for new subscriptions but doesn't want to assign them to all
subscriptions. This setting supports both use cases.
-By allowing the default management group for new subscriptions to be defined, organization-wide
-governance constructs can be applied at the root management group, and a separate management group
-with policy assignments or Azure role assignments more suited to a new subscription can be defined.
+By allowing the default management group for new subscriptions to be defined, you can apply organization-wide
+governance constructs at the root management group. You can define a separate management group
+with policy assignments or Azure role assignments that are more suited to a new subscription.
-### Set default management group in portal
+### Define the default management group in the portal
-To configure this setting in the Azure portal, follow these steps:
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Use the search bar to search for and select 'Management groups'.
+1. Use the search bar to search for and select **Management groups**.
1. On the root management group, select **details** next to the name of the management group.
To configure this setting in the Azure portal, follow these steps:
1. Select the **Change default management group** button.
- > [!NOTE]
- > If the **Change default management group** button is disabled, either the management group
- > being viewed isn't the root management group or your security principal doesn't have the
- > necessary permissions to alter the hierarchy settings.
+ If the **Change default management group** button is unavailable, the cause is one of these conditions:
+
+ - The management group that you're viewing isn't the root management group.
+ - Your security principal doesn't have the necessary permissions to alter the hierarchy settings.
-1. Select a management group from your hierarchy and use the **Select** button.
+1. Select a management group from your hierarchy, and then choose the **Select** button.
-### Set default management group with REST API
+### Define the default management group by using the REST API
-To configure this setting with REST API, the
-[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use
+To define the default management group by using the REST API, you must call the
+[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint. Use
the following REST API URI and body format. Replace `{rootMgID}` with the ID of your root management
-group and `{defaultGroupID}` with the ID of the management group to become the default management
-group:
+group. Replace `{defaultGroupID}` with the ID of the management group that will become the default management
+group.
-- REST API URI
+- REST API URI:
   ```http
   PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{rootMgID}/settings/default?api-version=2020-05-01
   ```

-- Request Body
+- Request body:
   ```json
   {
group:
   ```

To set the default management group back to the root management group, use the same endpoint and set
-**defaultManagementGroup** to a value of
+`defaultManagementGroup` to a value of
`/providers/Microsoft.Management/managementGroups/{rootMgID}`.
-## Setting - Require authorization
+## Setting: Require authorization
-Any user, by default, can create new management groups within a tenant. Admins of a tenant may wish
-to only provide these permissions to specific users to maintain consistency and conformity in the
-management group hierarchy. If enabled, a user requires the
-`Microsoft.Management/managementGroups/write` operation on the root management group to create new
-child management groups.
+Any user, by default, can create new management groups in a tenant. Admins of a tenant might want
+to provide these permissions only to specific users, to maintain consistency and conformity in the
+management group hierarchy. To create child management groups, a user requires the
+`Microsoft.Management/managementGroups/write` operation on the root management group.
-### Set require authorization in portal
+### Require authorization in the portal
-To configure this setting in the Azure portal, follow these steps:
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Use the search bar to search for and select 'Management groups'.
+1. Use the search bar to search for and select **Management groups**.
1. On the root management group, select **details** next to the name of the management group.
1. Under **Settings**, select **Hierarchy settings**.
-1. Toggle the **Require permissions for creating new management groups.** option to on.
+1. Turn on the **Require permissions for creating new management groups** toggle.
+
+ If the **Require permissions for creating new management groups** toggle is unavailable, the cause is one of these conditions:
- > [!NOTE]
- > If the **Require permissions for creating new management groups.** toggle is disabled, either
- > the management group being viewed isn't the root management group or your security principal
- > doesn't have the necessary permissions to alter the hierarchy settings.
+ - The management group that you're viewing isn't the root management group.
+ - Your security principal doesn't have the necessary permissions to alter the hierarchy settings.
-### Set require authorization with REST API
+### Require authorization by using the REST API
-To configure this setting with REST API, the
-[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use
-the following REST API URI and body format. This value is a _boolean_, so provide either **true** or
-**false** for the value. A value of **true** enables this method of protecting your management group
-hierarchy:
+To require authorization by using the REST API, call the
+[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint. Use
+the following REST API URI and body format. This value is a Boolean, so provide either `true` or
+`false` for the value. A value of `true` enables this method of protecting your management group
+hierarchy.
-- REST API URI
+- REST API URI:
  ```http
  PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{rootMgID}/settings/default?api-version=2020-05-01
  ```

-- Request Body
+- Request body:
  ```json
  {
    "properties": {
      "requireAuthorizationForGroupCreation": true
    }
  }
  ```
-To turn the setting back off, use the same endpoint and set
-**requireAuthorizationForGroupCreation** to a value of **false**.
+To turn off the setting, use the same endpoint and set
+`requireAuthorizationForGroupCreation` to a value of `false`.
-## PowerShell sample
+## Azure PowerShell sample
-PowerShell doesn't have an 'Az' command to set the default management group or set require
-authorization, but as a workaround you can use the REST API with the PowerShell sample below:
+Azure PowerShell doesn't have an `Az` command to define the default management group or to require
+authorization. As a workaround, you can use the REST API with the following Azure PowerShell sample:
```powershell
$root_management_group_id = "Enter the ID of root management group"
$default_management_group_id = "Enter the ID of the management group to use as the default"

# Request body for the hierarchy settings endpoint. The default management group
# and the require-authorization flag can be set in the same call.
$body = '{
  "properties": {
    "defaultManagementGroup": "/providers/Microsoft.Management/managementGroups/' + $default_management_group_id + '",
    "requireAuthorizationForGroupCreation": true
  }
}'

# Acquire a bearer token for Azure Resource Manager for the signed-in user.
$token = (Get-AzAccessToken).Token
$headers = @{ "Authorization" = "Bearer $token"; "Content-Type" = "application/json" }

$uri = "https://management.azure.com/providers/Microsoft.Management/managementGroups/$root_management_group_id/settings/default?api-version=2020-05-01"

Invoke-RestMethod -Method PUT -Uri $uri -Headers $headers -Body $body
```
-## Next steps
+## Related content
To learn more about management groups, see:

- [Create management groups to organize Azure resources](../create-management-group-portal.md)
-- [How to change, delete, or manage your management groups](../manage.md)
+- [Change, delete, or manage your management groups](../manage.md)
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
# Manage your Azure subscriptions at scale with management groups
-If your organization has many subscriptions, you may need a way to efficiently manage access,
+If your organization has many subscriptions, you might need a way to efficiently manage access,
policies, and compliance for those subscriptions. Azure management groups provide a level of scope
-above subscriptions. You organize subscriptions into containers called "management groups" and apply
+above subscriptions. You organize subscriptions into containers called *management groups* and apply
your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale no matter what type of
-subscriptions you might have. To learn more about management groups, see
+subscription you have. To learn more about management groups, see
[Organize your resources with Azure management groups](./overview.md).

[!INCLUDE [GDPR-related guidance](~/reusable-content/ce-skilling/azure/includes/gdpr-intro-sentence.md)]

> [!IMPORTANT]
-> Azure Resource Manager user tokens and management group cache lasts for 30 minutes before they are
-> forced to refresh. After doing any action like moving a management group or subscription, it might
-> take up to 30 minutes to show. To see the updates sooner you need to update your token by
+> Azure Resource Manager user tokens and management group cache last for 30 minutes before they're
+> forced to refresh. Any action like moving a management group or subscription might
+> take up to 30 minutes to appear. To see the updates sooner, you need to update your token by
> refreshing the browser, signing in and out, or requesting a new token.
-> [!IMPORTANT]
-> AzManagementGroup related Az PowerShell cmdlets mention that the **-GroupId** is alias of **-GroupName** parameter
-> so we can use either of it to provide Management Group Id as a string value.
+For the Azure PowerShell actions in this article, keep in mind that `AzManagementGroup`-related cmdlets mention that `-GroupId` is an alias of the `-GroupName` parameter.
+You can use either of them to provide the management group ID as a string value.
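For example, the following two calls are interchangeable (a minimal sketch; `Contoso` is a placeholder group ID):

```azurepowershell-interactive
# -GroupId is an alias of -GroupName, so both calls target the same management group.
Get-AzManagementGroup -GroupId 'Contoso'
Get-AzManagementGroup -GroupName 'Contoso'
```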
## Change the name of a management group
-You can change the name of the management group by using the portal, PowerShell, or Azure CLI.
+You can change the name of the management group by using the Azure portal, Azure PowerShell, or the Azure CLI.
### Change the name in the portal
-1. Log into the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **All services** > **Management groups**.
-1. Select the management group you would like to rename.
+1. Select the management group that you want to rename.
1. Select **details**.
-1. Select the **Rename group** option at the top of the page.
+1. Select the **Rename Group** option at the top of the pane.
- :::image type="content" source="./media/detail_action_small.png" alt-text="Screenshot of the action bar and the 'Rename Group' button on the management group page." border="false":::
+ :::image type="content" source="./media/detail_action_small.png" alt-text="Screenshot of the action bar and the Rename Group button on the management group page." border="false":::
-1. When the menu opens, enter the new name you would like to have displayed.
+1. On the **Rename Group** pane, enter the new name that you want to display.
- :::image type="content" source="./media/rename_context.png" alt-text="Screenshot of the Rename Group window and options to rename a management group." border="false":::
+ :::image type="content" source="./media/rename_context.png" alt-text="Screenshot of the options to rename a management group." border="false":::
1. Select **Save**.
-### Change the name in PowerShell
+### Change the name in Azure PowerShell
-To update the display name use **Update-AzManagementGroup**. For example, to change a management
-groups display name from "Contoso IT" to "Contoso Group", you run the following command:
+To update the display name, use `Update-AzManagementGroup` in Azure PowerShell. For example, to change a management
+group's display name from **Contoso IT** to **Contoso Group**, run the following command:
```azurepowershell-interactive
Update-AzManagementGroup -GroupId 'ContosoIt' -DisplayName 'Contoso Group'
```
-### Change the name in Azure CLI
+### Change the name in the Azure CLI
-For Azure CLI, use the update command.
+For the Azure CLI, use the `update` command:
```azurecli-interactive
az account management-group update --name 'Contoso' --display-name 'Contoso Group'
```
## Delete a management group
-To delete a management group, the following requirements must be met:
+To delete a management group, you must meet the following requirements:
-1. There are no child management groups or subscriptions under the management group. To move a
+- There are no child management groups or subscriptions under the management group. To move a
subscription or management group to another management group, see
- [Moving management groups and subscriptions in the hierarchy](#moving-management-groups-and-subscriptions).
+ [Move management groups and subscriptions](#move-management-groups-and-subscriptions) later in this article.
-1. You need write permissions on the management group ("Owner", "Contributor", or "Management Group
- Contributor"). To see what permissions you have, select the management group and then select
+- You need write permissions on the management group (Owner, Contributor, or Management Group
+ Contributor). To see what permissions you have, select the management group and then select
**IAM**. To learn more on Azure roles, see
- [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+ [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
-### Delete in the portal
+### Delete a management group in the portal
-1. Log into the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **All services** > **Management groups**.
-1. Select the management group you would like to delete.
+1. Select the management group that you want to delete.
1. Select **details**.
-1. Select **Delete**
+1. Select **Delete**.
- :::image type="content" source="./media/delete.png" alt-text="Screenshot of the Management group page with the 'Delete' button highlighted." border="false":::
+ :::image type="content" source="./media/delete.png" alt-text="Screenshot of the management group page with the Delete button." border="false":::
> [!TIP]
- > If the icon is disabled, hovering your mouse selector over the icon shows you the reason.
+ > If the **Delete** button is unavailable, hovering over the button shows you the reason.
-1. There's a window that opens confirming you want to delete the management group.
+1. A dialog opens and asks you to confirm that you want to delete the management group.
- :::image type="content" source="./media/delete_confirm.png" alt-text="Screenshot of the 'Delete group' confirmation dialog for deleting a management group." border="false":::
+ :::image type="content" source="./media/delete_confirm.png" alt-text="Screenshot of the confirmation dialog for deleting a management group." border="false":::
1. Select **Yes**.
-### Delete in PowerShell
+### Delete a management group in Azure PowerShell
-Use the **Remove-AzManagementGroup** command within PowerShell to delete management groups.
+To delete a management group, use the `Remove-AzManagementGroup` command in Azure PowerShell:
```azurepowershell-interactive
Remove-AzManagementGroup -GroupId 'Contoso'
```
-### Delete in Azure CLI
+### Delete a management group in the Azure CLI
-With Azure CLI, use the command az account management-group delete.
+With the Azure CLI, use the command `az account management-group delete`:
```azurecli-interactive
az account management-group delete --name 'Contoso'
```
## View management groups
-You can view any management group you have a direct or inherited Azure role on.
+You can view any management group if you have a direct or inherited Azure role on it.
-### View in the portal
+### View management groups in the portal
-1. Log into the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **All services** > **Management groups**.
-1. The management group hierarchy page will load. This page is where you can explore all the
- management groups and subscriptions you have access to. Selecting the group name takes you to a
+1. The page for management group hierarchy appears. On this page, you can explore all the
+ management groups and subscriptions that you have access to. Selecting the group name takes you to a
   lower level in the hierarchy. The navigation works the same as a file explorer does.
1. To see the details of the management group, select the **(details)** link next to the title of the management group. If this link isn't available, you don't have permissions to view that management group.
- :::image type="content" source="./media/main.png" alt-text="Screenshot of the Management groups page showing child management groups and subscriptions." border="false":::
+ :::image type="content" source="./media/main.png" alt-text="Screenshot of the management groups page that shows child management groups and subscriptions." border="false":::
-### View in PowerShell
+### View management groups in Azure PowerShell
-You use the Get-AzManagementGroup command to retrieve all groups. See
-[Az.Resources](/powershell/module/az.resources/Get-AzManagementGroup) modules for the full list of
-management group GET PowerShell commands.
+You use the `Get-AzManagementGroup` command to retrieve all groups. For the full list of
+`GET` PowerShell commands for management groups, see the
+[Az.Resources](/powershell/module/az.resources/Get-AzManagementGroup) modules.
```azurepowershell-interactive
Get-AzManagementGroup
```
-For a single management group's information, use the -GroupId parameter
+For a single management group's information, use the `-GroupId` parameter:
```azurepowershell-interactive
Get-AzManagementGroup -GroupId 'Contoso'
```
-To return a specific management group and all the levels of the hierarchy under it, use **-Expand**
-and **-Recurse** parameters.
+To return a specific management group and all the levels of the hierarchy under it, use the `-Expand`
+and `-Recurse` parameters:
```azurepowershell-interactive
PS C:\> $response = Get-AzManagementGroup -GroupId TestGroupParent -Expand -Recurse
...
DisplayName : TestRecurseChild
Children    :
```
-### View in Azure CLI
+### View management groups in the Azure CLI
-You use the list command to retrieve all groups.
+You use the `list` command to retrieve all groups:
```azurecli-interactive
az account management-group list
```
-For a single management group's information, use the show command
+For a single management group's information, use the `show` command:
```azurecli-interactive
az account management-group show --name 'Contoso'
```
-To return a specific management group and all the levels of the hierarchy under it, use **-Expand**
-and **-Recurse** parameters.
+To return a specific management group and all the levels of the hierarchy under it, use the `--expand`
+and `--recurse` parameters (`-e` and `-r` for short):
```azurecli-interactive
az account management-group show --name 'Contoso' -e -r
```
-## Moving management groups and subscriptions
+## Move management groups and subscriptions
One reason to create a management group is to bundle subscriptions together. Only management groups
-and subscriptions can be made children of another management group. A subscription that moves to a
-management group inherits all user access and policies from the parent management group. You can move subscriptions between management groups. Take note that a subscription can only have one parent management group.
+and subscriptions can become children of another management group. A subscription that moves to a
+management group inherits all user access and policies from the parent management group.
+
+You can move subscriptions between management groups. A subscription can have only one parent management group.
-When moving a management group or subscription to be a child of another management group, three
+When you move a management group or subscription to be a child of another management group, three
rules need to be evaluated as true. If you're doing the move action, you need permission at each of the following layers:

-- Child subscription / management group
+- Child subscription or management group
- `Microsoft.management/managementgroups/write`
- - `Microsoft.management/managementgroups/subscriptions/write` (only for Subscriptions)
+ - `Microsoft.management/managementgroups/subscriptions/write` (only for subscriptions)
  - `Microsoft.Authorization/roleAssignments/write`
  - `Microsoft.Authorization/roleAssignments/delete`
  - `Microsoft.Management/register/action`
- Target parent management group
  - `Microsoft.management/managementgroups/write`
- Current parent management group
  - `Microsoft.management/managementgroups/write`
-**Exception**: If the target or the existing parent management group is the Root management group,
-the permissions requirements don't apply. Since the Root management group is the default landing
+There's an exception: if the target or the existing parent management group is the root management group,
+the permission requirements don't apply. Because the root management group is the default landing
spot for all new management groups and subscriptions, you don't need permissions on it to move an item. If the Owner role on the subscription is inherited from the current management group, your move
-targets are limited. You can only move the subscription to another management group where you have
+targets are limited. You can move the subscription only to another management group where you have
the Owner role. You can't move the subscription to a management group where you're only a
-contributor because you would lose ownership of the subscription. If you're directly assigned to the
-Owner role for the subscription, you can move it to any management group where you're a contributor.
+Contributor because you would lose ownership of the subscription. If you're directly assigned to the
+Owner role for the subscription, you can move it to any management group where you have the Contributor role.
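To check whether an Owner assignment is direct or inherited before you plan a move, you can list the assignments and inspect their `Scope` property. A minimal sketch, with a placeholder subscription ID:

```azurepowershell-interactive
# If Scope points at /providers/Microsoft.Management/managementGroups/...,
# the Owner role is inherited from a management group rather than assigned directly.
Get-AzRoleAssignment -Scope '/subscriptions/12345678-1234-1234-1234-123456789012' -RoleDefinitionName 'Owner' |
    Select-Object DisplayName, SignInName, Scope
```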
To see what permissions you have in the Azure portal, select the management group and then select
-**IAM**. To learn more on Azure roles, see
-[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-
-## Move subscriptions
+**IAM**. To learn more about Azure roles, see
+[What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
-### Add an existing Subscription to a management group in the portal
+### Add an existing subscription to a management group in the portal
-1. Log into the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **All services** > **Management groups**.
-1. Select the management group you're planning to be the parent.
+1. Select the management group that you want to be the parent.
1. At the top of the page, select **Add subscription**.
1. Select the subscription in the list with the correct ID.
- :::image type="content" source="./media/add_context_sub.png" alt-text="Screenshot of the 'Add subscription' options for selecting an existing subscription to add to a management group." border="false":::
+ :::image type="content" source="./media/add_context_sub.png" alt-text="Screenshot of the box for selecting an existing subscription to add to a management group." border="false":::
-1. Select "Save".
+1. Select **Save**.
### Remove a subscription from a management group in the portal
-1. Log into the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **All services** > **Management groups**.
-1. Select the management group you're planning that is the current parent.
+1. Select the management group that's the current parent.
-1. Select the ellipse at the end of the row for the subscription in the list you want to move.
+1. Select the ellipsis (`...`) at the end of the row for the subscription in the list that you want to move.
- :::image type="content" source="./media/move_small.png" alt-text="Screenshot of the alternative menu for a subscription to select the 'Move' option." border="false":::
+ :::image type="content" source="./media/move_small.png" alt-text="Screenshot of the menu that includes the move option for a subscription." border="false":::
1. Select **Move**.
-1. On the menu that opens, select the **Parent management group**.
+1. On the **Move** pane, select the value for **New parent management group ID**.
- :::image type="content" source="./media/move_small_context.png" alt-text="Screenshot of the 'Move' window and options for moving a subscription to a different management group." border="false":::
+ :::image type="content" source="./media/move_small_context.png" alt-text="Screenshot of the pane for moving a subscription to a different management group." border="false":::
1. Select **Save**.
-### Move subscriptions in PowerShell
+### Move a subscription in Azure PowerShell
-To move a subscription in PowerShell, you use the New-AzManagementGroupSubscription command.
+To move a subscription in PowerShell, you use the `New-AzManagementGroupSubscription` command:
```azurepowershell-interactive
New-AzManagementGroupSubscription -GroupId 'Contoso' -SubscriptionId '12345678-1234-1234-1234-123456789012'
```
-To remove the link between the subscription and the management group use the
-Remove-AzManagementGroupSubscription command.
+To remove the link between the subscription and the management group, use the
+`Remove-AzManagementGroupSubscription` command:
```azurepowershell-interactive
Remove-AzManagementGroupSubscription -GroupId 'Contoso' -SubscriptionId '12345678-1234-1234-1234-123456789012'
```
-### Move subscriptions in Azure CLI
+### Move a subscription in the Azure CLI
-To move a subscription in CLI, you use the add command.
+To move a subscription in the Azure CLI, you use the `add` command:
```azurecli-interactive
az account management-group subscription add --name 'Contoso' --subscription '12345678-1234-1234-1234-123456789012'
```
-To remove the subscription from the management group, use the subscription remove command.
+To remove the subscription from the management group, use the `subscription remove` command:
```azurecli-interactive
az account management-group subscription remove --name 'Contoso' --subscription '12345678-1234-1234-1234-123456789012'
```
-### Move subscriptions in ARM template
+### Move a subscription in an ARM template
To move a subscription in an Azure Resource Manager template (ARM template), use the following
-template and deploy it at [tenant level](../../azure-resource-manager/templates/deploy-to-tenant.md).
+template and deploy it at the [tenant level](../../azure-resource-manager/templates/deploy-to-tenant.md):
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "targetMgId": {
      "type": "string",
      "metadata": {
        "description": "Provide the ID of the management group that you want to move the subscription to."
      }
    },
    "subscriptionId": {
      "type": "string",
      "metadata": {
        "description": "Provide the ID of the existing subscription to move."
      }
    }
  },
  "resources": [
    {
      "scope": "/",
      "type": "Microsoft.Management/managementGroups/subscriptions",
      "apiVersion": "2020-05-01",
      "name": "[concat(parameters('targetMgId'), '/', parameters('subscriptionId'))]",
      "properties": {}
    }
  ]
}
```
-Or, the following Bicep file.
+Or, use the following Bicep file:
```bicep
targetScope = 'managementGroup'

@description('Provide the ID of the management group that you want to move the subscription to.')
param targetMgId string

@description('Provide the ID of the existing subscription to move.')
param subscriptionId string

resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01' = {
  scope: tenant()
  name: '${targetMgId}/${subscriptionId}'
}
```
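As a usage sketch, the JSON template can be deployed at the tenant scope with `New-AzTenantDeployment`; the file name and parameter values here are placeholders. (The Bicep version targets a management group scope, so it would instead be deployed with `New-AzManagementGroupDeployment`.)

```azurepowershell-interactive
# Tenant-scope deployment of the ARM template shown earlier.
# 'movesub.json' and both parameter values are placeholders.
New-AzTenantDeployment `
  -Name 'moveSubscription' `
  -Location 'westus2' `
  -TemplateFile './movesub.json' `
  -TemplateParameterObject @{
    targetMgId     = 'Contoso'
    subscriptionId = '12345678-1234-1234-1234-123456789012'
  }
```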
-## Move management groups
-
-### Move management groups in the portal
+### Move a management group in the portal
-1. Log into the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **All services** > **Management groups**.
-1. Select the management group you're planning to be the parent.
+1. Select the management group that you want to be the parent.
1. At the top of the page, select **Add management group**.
-1. In the menu that opens, select if you want a new or use an existing management group.
+1. On the **Add management group** pane, choose whether you want to use a new or existing management group:
- - Selecting new will create a new management group.
- - Selecting an existing will present you with a dropdown list of all the management groups you
+ - Selecting **Create new** creates a new management group.
+ - Selecting **Use existing** presents you with a dropdown list of all the management groups that you
can move to this management group.
- :::image type="content" source="./media/add_context_MG.png" alt-text="Screenshot of the 'Add management group' options for creating a new management group." border="false":::
+ :::image type="content" source="./media/add_context_MG.png" alt-text="Screenshot of the pane for adding a management group." border="false":::
1. Select **Save**.
-### Move management groups in PowerShell
+### Move a management group in Azure PowerShell
-Use the Update-AzManagementGroup command in PowerShell to move a management group under a different
-group.
+To move a management group under a different
+group, use the `Update-AzManagementGroup` command in Azure PowerShell:
```azurepowershell-interactive
$parentGroup = Get-AzManagementGroup -GroupId ContosoIT
Update-AzManagementGroup -GroupId 'Contoso' -ParentId $parentGroup.id
```
-### Move management groups in Azure CLI
+### Move a management group in the Azure CLI
-Use the update command to move a management group with Azure CLI.
+To move a management group in the Azure CLI, use the `update` command:
```azurecli-interactive
az account management-group update --name 'Contoso' --parent ContosoIT
```
-## Audit management groups using activity logs
+## Audit management groups by using activity logs
-Management groups are supported within
-[Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md). You can query all
+Management groups are supported in [Azure Monitor activity logs](../../azure-monitor/essentials/platform-logs-overview.md). You can query all
events that happen to a management group in the same central location as other Azure resources. For
-example, you can see all Role Assignments or Policy Assignment changes made to a particular
+example, you can see all role assignments or policy assignment changes made to a particular
management group.
-When looking to query on Management Groups outside of the Azure portal, the target scope for
-management groups looks like **"/providers/Microsoft.Management/managementGroups/{yourMgID}"**.
+When you want to query on management groups outside the Azure portal, the target scope for
+management groups looks like `"/providers/Microsoft.Management/managementGroups/{yourMgID}"`.
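The same scope string also works anywhere a `-Scope` parameter is accepted. For example, this minimal sketch lists the role assignments currently in effect at a management group (the group name is a placeholder):

```azurepowershell-interactive
# List role assignments that apply at the Contoso management group scope.
Get-AzRoleAssignment -Scope '/providers/Microsoft.Management/managementGroups/Contoso' |
    Select-Object DisplayName, RoleDefinitionName, Scope
```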
-## Referencing management groups from other Resource Providers
+## Reference management groups from other resource providers
-When referencing management groups from other Resource Provider's actions, use the following path as
-the scope. This path is used when using PowerShell, Azure CLI, and REST APIs.
+When you're referencing management groups from another resource provider's actions, use the following path as
+the scope. This path applies when you're using Azure PowerShell, the Azure CLI, and REST APIs.
`/providers/Microsoft.Management/managementGroups/{yourMgID}`
-An example of using this path is when assigning a new role assignment to a management group in
-PowerShell:
+An example of using this path is when you're assigning a new role to a management group in
+Azure PowerShell:
```azurepowershell-interactive
# The role name and assignee are placeholder values; the scope path is the part to note.
New-AzRoleAssignment -Scope "/providers/Microsoft.Management/managementGroups/Contoso" `
  -RoleDefinitionName "Management Group Reader" `
  -SignInName "alice@contoso.com"
```
-The same scope path is used when retrieving a policy definition at a management group.
+You use the same scope path to retrieve a policy definition for a management group:
```http
GET https://management.azure.com/providers/Microsoft.Management/managementgroups/MyManagementGroup/providers/Microsoft.Authorization/policyDefinitions/ResourceNaming?api-version=2019-09-01
```
-## Next steps
+## Related content
To learn more about management groups, see:

- [Create management groups to organize Azure resources](./create-management-group-portal.md)
-- [How to change, delete, or manage your management groups](./manage.md)
-- [Review management groups in Azure PowerShell Resources Module](/powershell/module/az.resources#resources)
-- [Review management groups in REST API](/rest/api/managementgroups/managementgroups)
-- [Review management groups in Azure CLI](/cli/azure/account/management-group)
+- [Review management groups in the Azure PowerShell Az.Resources module](/powershell/module/az.resources#resources)
+- [Review management groups in the REST API](/rest/api/managementgroups/managementgroups)
+- [Review management groups in the Azure CLI](/cli/azure/account/management-group)
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance
-description: Learn about the management groups, how their permissions work, and how to use them.
+description: Learn about management groups, how their permissions work, and how to use them.
Last updated 07/18/2024
# What are Azure management groups?
-If your organization has many Azure subscriptions, you may need a way to efficiently manage access,
+If your organization has many Azure subscriptions, you might need a way to efficiently manage access,
policies, and compliance for those subscriptions. _Management groups_ provide a governance scope
-above subscriptions. You organize subscriptions into management groups; the governance conditions you apply
+above subscriptions. When you organize subscriptions into management groups, the governance conditions that you apply
cascade by inheritance to all associated subscriptions. Management groups give you enterprise-grade management at scale, no matter what type of subscriptions you might have.
-However, all subscriptions within a single management group must trust the same Microsoft Entra ID tenant.
+However, all subscriptions within a single management group must trust the same Microsoft Entra tenant.
-For example, you can apply policies to a management group that limits the regions available for virtual machine (VM) creation. This policy would be applied to all nested management groups, subscriptions, and resources and allow VM creation only in authorized regions.
+For example, you can apply a policy to a management group that limits the regions available for virtual machine (VM) creation. This policy would be applied to all nested management groups, subscriptions, and resources to allow VM creation only in authorized regions.
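As an illustration, the following sketch assigns the built-in **Allowed locations** policy definition at a management group scope by using Azure PowerShell. The management group name and region list are placeholders, and the definition is referenced by the commonly documented ID of that built-in definition:

```azurepowershell-interactive
# Assign the built-in 'Allowed locations' policy definition to a management group.
# 'Contoso' and the region list are placeholder values.
$definition = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c'

New-AzPolicyAssignment -Name 'allowed-locations' `
  -Scope '/providers/Microsoft.Management/managementGroups/Contoso' `
  -PolicyDefinition $definition `
  -PolicyParameterObject @{ listOfAllowedLocations = @('westus') }
```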
## Hierarchy of management groups and subscriptions

You can build a flexible structure of management groups and subscriptions to organize your resources into a hierarchy for unified policy and access management. The following diagram shows an example of
-creating a hierarchy for governance using management groups.
+creating a hierarchy for governance by using management groups.
:::image type="complex" source="../media/mg-org.png" alt-text="Diagram of a sample management group hierarchy." border="false":::
- Diagram of a root management group holding both management groups and subscriptions. Some child management groups hold management groups, some hold subscriptions, and some hold both. One of the examples in the sample hierarchy is four levels of management groups, with the child level being all subscriptions.
+ Diagram of a root management group that holds both management groups and subscriptions. Some child management groups hold management groups, some hold subscriptions, and some hold both. One of the examples in the sample hierarchy is four levels of management groups, with all subscriptions at the child level.
:::image-end:::
-You can create a hierarchy that applies a policy, for example, which limits VM locations to the West US region in the management group called "Corp". This policy will inherit all the Enterprise Agreement (EA) subscriptions that are descendants of that management group and will apply to all VMs under those subscriptions. This security policy cannot be altered by the resource or subscription
-owner, allowing for improved governance.
+You can create a hierarchy that applies a policy, for example, that limits VM locations to the West US region in the management group called _Corp_. This policy is inherited by all the Enterprise Agreement (EA) subscriptions that are descendants of that management group and applies to all VMs under those subscriptions. The resource or subscription
+owner can't alter this security policy, which allows for improved governance.
> [!NOTE]
-> Management groups aren't currently supported in Cost Management features for Microsoft Customer Agreement (MCA) subscriptions.
+> Management groups aren't currently supported in cost management features for Microsoft Customer Agreement (MCA) subscriptions.
Another scenario where you would use management groups is to provide user access to multiple
-subscriptions. By moving multiple subscriptions under that management group, you can create one
-[Azure role assignment](../../role-based-access-control/overview.md) on the management group, which
+subscriptions. By moving multiple subscriptions under a management group, you can create one
+[Azure role assignment](../../role-based-access-control/overview.md) on the management group. The role
will inherit that access to all the subscriptions. One assignment on the management group can enable
-users to have access to everything they need instead of scripting Azure RBAC over different
+users to have access to everything they need, instead of scripting Azure role-based access control (RBAC) over different
subscriptions.

### Important facts about management groups

-- 10,000 management groups can be supported in a single directory.
+- A single directory can support 10,000 management groups.
- A management group tree can support up to six levels of depth.
- - This limit doesn't include the Root level or the subscription level.
-- Each management group and subscription can only support one parent.
+
+ This limit doesn't include the root level or the subscription level.
+- Each management group and subscription can support only one parent.
- Each management group can have many children.
-- All subscriptions and management groups are within a single hierarchy in each directory. See
- [Important facts about the Root management group](#important-facts-about-the-root-management-group).
+- All subscriptions and management groups are within a single hierarchy in each directory. For more information, see
+ [Important facts about the root management group](#important-facts-about-the-root-management-group) later in this article.
## Root management group for each directory
-Each directory is given a single top-level management group called the **root** management group. The
+Each directory has a single top-level management group called the _root_ management group. The
root management group is built into the hierarchy to have all management groups and subscriptions
-fold up to it. This root management group allows for global policies and Azure role assignments to
-be applied at the directory level. The [Microsoft Entra ID Global Administrator needs to elevate
+fold up to it.
+
+The root management group allows for the application of global policies and Azure role assignments
+at the directory level. Initially, the [Microsoft Entra Global Administrator needs to elevate
themselves](../../role-based-access-control/elevate-access-global-admin.md) to the User Access
-Administrator role of this root group initially. After elevating access, the administrator can
+Administrator role of this root group. After elevating access, the administrator can
assign any Azure role to other directory users or groups to manage the hierarchy. As an administrator, you can assign your account as the owner of the root management group.

### Important facts about the root management group

-- By default, the root management group's display name is **Tenant root group** and operates itself as a management group. The ID is the same value as the Microsoft Entra ID tenant ID.
-- To change the display name, your account must be assigned the **Owner** or **Contributor** role on the
- root management group. See
- [Change the name of a management group](manage.md#change-the-name-of-a-management-group) to update
- the name of a management group.
+- By default, the root management group's display name is **Tenant root group**, and it operates itself as a management group. The ID is the same value as the Microsoft Entra tenant ID.
+- To change the display name, your account must have the Owner or Contributor role on the
+ root management group. For more information, see
+ [Change the name of a management group](manage.md#change-the-name-of-a-management-group).
- The root management group can't be moved or deleted, unlike other management groups.
- All subscriptions and management groups fold up into one root management group within the directory.
  - All resources in the directory fold up to the root management group for global management.
- - New subscriptions are automatically defaulted to the root management group when created.
+ - New subscriptions automatically default to the root management group when they're created.
- All Azure customers can see the root management group, but not all customers have access to manage that root management group.
  - Everyone who has access to a subscription can see the context of where that subscription is in the hierarchy.
- - No one is given default access to the root management group. Microsoft Entra ID Global Administrators are
- the only users that can elevate themselves to gain access. Once they have access to the root
- management group, the global administrators can assign any Azure role to other users to manage
- it.
+ - No one has default access to the root management group. Microsoft Entra Global Administrators are
+ the only users who can elevate themselves to gain access. After they have access to the root
+ management group, they can assign any Azure role to other users to manage the group.
> [!IMPORTANT]
-> Any assignment of user access or policy on the root management group **applies to all
-> resources within the directory**. Because of this, all customers should evaluate the need to have
-> items defined on this scope. User access and policy assignments should be "Must Have" only at this
+> Any assignment of user access or policy on the root management group applies to all
+> resources within the directory. Because of this access level, all customers should evaluate the need to have
+> items defined on this scope. User access and policy assignments should be "must have" only at this
> scope.

## Initial setup of management groups
-When any user starts using management groups, there's an initial setup process that happens. The
-first step is the root management group is created in the directory. Once this group is created, all
-existing subscriptions that exist in the directory are made children of the root management group.
+When any user starts using management groups, an initial setup process happens. The
+first step is creation of the root management group in the directory. All
+existing subscriptions that exist in the directory then become children of the root management group.
+ The reason for this process is to make sure there's only one management group hierarchy within a directory. The single hierarchy within the directory allows administrative customers to apply global
-access and policies that other customers within the directory can't bypass. Anything assigned on the
-root will apply to the entire hierarchy, which includes all management groups, subscriptions,
-resource groups, and resources within that Microsoft Entra ID tenant.
+access and policies that other customers within the directory can't bypass.
+
+Anything assigned on the
+root applies to the entire hierarchy. That is, it applies to all management groups, subscriptions,
+resource groups, and resources within that Microsoft Entra tenant.
## Management group access

Azure management groups support
-[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for all
-resource accesses and role definitions. These permissions are inherited to child resources that
-exist in the hierarchy. Any Azure role can be assigned to a management group that will inherit down
-the hierarchy to the resources. For example, the Azure role VM contributor can be assigned to a
+[Azure RBAC](../../role-based-access-control/overview.md) for all
+resource access and role definitions. Child resources that
+exist in the hierarchy inherit these permissions. Any Azure role can be assigned to a management group that will inherit down
+the hierarchy to the resources.
+
+For example, you can assign the Azure role VM Contributor to a
management group. This role has no action on the management group but will inherit to all VMs under that management group. The following chart shows the list of roles and the supported actions on management groups.
-| Azure Role Name | Create | Rename | Move\*\* | Delete | Assign Access | Assign Policy | Read |
+| Azure role name | Create | Rename | Move\*\* | Delete | Assign access | Assign policy | Read |
|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Owner | X | X | X | X | X | X | X |
|Contributor | X | X | X | X | | | X |
-|MG Contributor\* | X | X | X | X | | | X |
+|Management Group Contributor\* | X | X | X | X | | | X |
|Reader | | | | | | | X |
-|MG Reader\* | | | | | | | X |
+|Management Group Reader\* | | | | | | | X |
|Resource Policy Contributor | | | | | | X | |
|User Access Administrator | | | | | X | X | |
-\*: The **Management Group Contributor** and **Management Group Reader** roles allow users to perform those actions only on the management group scope.
+\*: These roles allow users to perform the specified actions only on the management group scope.
-\*\*: Role assignments on the root management group aren't required to move a subscription or
+\*\*: Role assignments on the root management group aren't required to move a subscription or a
management group to and from it.
-See [Manage your resources with management groups](manage.md) for
-details on moving items within the hierarchy.
+For details on moving items within the hierarchy, see [Manage your resources with management groups](manage.md).
## Azure custom role definition and assignment

You can define a management group as an assignable scope in an Azure custom role definition. The Azure custom role will then be available for assignment on that management group and any management group, subscription, resource group, or resource under it. The custom role
-will inherit down the hierarchy like any built-in role. For information about the limitations with custom roles and management groups, see [Limitations](#limitations).
+will inherit down the hierarchy like any built-in role.
+
+For information about the limitations with custom roles and management groups, see [Limitations](#limitations) later in this article.
### Example definition

[Defining and creating a custom role](../../role-based-access-control/custom-roles.md) doesn't
-change with the inclusion of management groups. Use the full path to define the management group
+change with the inclusion of management groups. Use the full path to define the management group:
`/providers/Microsoft.Management/managementgroups/{_groupId_}`. Use the management group's ID and not the management group's display name. This common error happens
-since both are custom-defined fields when creating a management group.
+because both are custom-defined fields in creating a management group.
```json
{
  ...
  "assignableScopes": [
    "/providers/Microsoft.Management/managementgroups/{groupId}"
  ]
}
```
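If you prefer to build the definition with Azure PowerShell instead of raw JSON, a common pattern is to clone a built-in role and point its assignable scope at the management group. A minimal sketch; the role name and group ID are placeholders:

```azurepowershell-interactive
# Clone the built-in Reader role into a custom role assignable at a management group.
$role = Get-AzRoleDefinition -Name 'Reader'
$role.Id = $null
$role.Name = 'MG Example Reader'
$role.Description = 'Example custom role with a management group assignable scope.'
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/providers/Microsoft.Management/managementgroups/Contoso')
New-AzRoleDefinition -Role $role
```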
### Issues with breaking the role definition and assignment hierarchy path
-Role definitions are assignable scope anywhere within the management group hierarchy. A role
-definition can be defined on a parent management group while the actual role assignment exists on
-the child subscription. Since there's a relationship between the two items, you'll receive an error
-when trying to separate the assignment from its definition.
+Role definitions are assignable scopes anywhere within the management group hierarchy. A role
+definition can be on a parent management group, whereas the actual role assignment exists on
+the child subscription. Because there's a relationship between the two items, you'll receive an error
+if you try to separate the assignment from its definition.
-For example, let's look at a small section of a hierarchy for a visual.
+For example, consider the following small section of a hierarchy.
:::image type="complex" source="../media/mg-org-sub.png" alt-text="Diagram of a subset of the sample management group hierarchy." border="false":::
- The diagram focuses on the root management group with child Landing zones and Sandbox management groups. The Landing zones management group has two child management groups named Corp and Online while the Sandbox management group has two child subscriptions.
+ The diagram focuses on the root management group with child landing zones and sandbox management groups. The management group for landing zones has two child management groups named Corp and Online, whereas the sandbox management group has two child subscriptions.
:::image-end:::
-Let's say there's a custom role defined on the Sandbox management group. That custom role is then
-assigned on the two Sandbox subscriptions.
+Assume that a custom role is defined on the sandbox management group. That custom role is then
+assigned on the two sandbox subscriptions.
-If we try to move one of those subscriptions to be a child of the Corp management group, this
-move would break the path from subscription role assignment to the Sandbox management group role
-definition. In this scenario, you'll receive an error saying the move isn't allowed since it will
+If you try to move one of those subscriptions to be a child of the Corp management group, you'll break the path from subscription role assignment to the role definition for the sandbox management group. In this scenario, you'll receive an error that says the move isn't allowed because it will
break this relationship.
-There are a couple different options to fix this scenario:
+To fix this scenario, you have these options:
+ - Remove the role assignment from the subscription before moving the subscription to a new parent
- MG.
+ management group.
- Add the subscription to the role definition's assignable scope.-- Change the assignable scope within the role definition. In the above example, you can update the
- assignable scopes from Sandbox to the root management group so that the definition can be reached by
- both branches of the hierarchy.
-- Create another custom role that is defined in the other branch. This new role requires the role
- assignment to be changed on the subscription also.
+- Change the assignable scope within the role definition. In this example, you can update the
+ assignable scopes from the sandbox management group to the root management group so that both branches of the hierarchy can reach the definition.
+- Create another custom role that's defined in the other branch. This new role also requires you to change the role
+ on the subscription.
### Limitations
-There are limitations that exist when using custom roles on management groups.
+There are limitations to using custom roles on management groups:
-- You can only define one management group in the assignable scopes of a new role. This limitation
+- You can define only one management group in the assignable scopes of a new role. This limitation
is in place to reduce the number of situations where role definitions and role assignments are
- disconnected. This situation happens when a subscription or management group with a role
+ disconnected. This kind of situation happens when a subscription or management group with a role
assignment moves to a different parent that doesn't have the role definition. - Custom roles with `DataActions` can't be assigned at the management group scope. For more information, see [Custom role limits](../../role-based-access-control/custom-roles.md#custom-role-limits). - Azure Resource Manager doesn't validate the management group's existence in the role
- definition's assignable scope. If there's a typo or an incorrect management group ID listed, the
+ definition's assignable scope. If there's a typo or an incorrect management group ID, the
role definition is still created. ## Moving management groups and subscriptions
-To move a management group or subscription to be a child of another management group, three rules
-need to be evaluated as true.
-
-If you're doing the move action, you need:
+To move a management group or subscription to be a child of another management group, you need:
-- Management group write and role assignment write permissions on the child subscription or
+- Management group write permissions and role assignment write permissions on the child subscription or
management group.
- - Built-in role example: **Owner**
+ - Built-in role example: Owner
- Management group write access on the target parent management group.
- - Built-in role example: **Owner**, **Contributor**, **Management Group Contributor**
+ - Built-in role example: Owner, Contributor, Management Group Contributor
- Management group write access on the existing parent management group.
- - Built-in role example: **Owner**, **Contributor**, **Management Group Contributor**
+ - Built-in role example: Owner, Contributor, Management Group Contributor
-**Exception**: If the target or the existing parent management group is the root management group,
-the permissions requirements don't apply. Since the root management group is the default landing
+There's an exception: if the target or the existing parent management group is the root management group,
+the permission requirements don't apply. Because the root management group is the default landing
spot for all new management groups and subscriptions, you don't need permissions on it to move an item.
-If the **Owner** role on the subscription is inherited from the current management group, your move
-targets are limited. You can only move the subscription to another management group where you have
-the **Owner** role. You can't move it to a management group where you're a **Contributor** because you would
-lose ownership of the subscription. If you're directly assigned to the **Owner** role for the
-subscription (not inherited from the management group), you can move it to any management group
-where you're assigned the **Contributor** role.
+If the Owner role on the subscription is inherited from the current management group, your move
+targets are limited. You can move the subscription only to another management group where you have
+the Owner role. You can't move the subscription to a management group where you're only a
+Contributor because you would lose ownership of the subscription. If you're directly assigned to the
+Owner role for the subscription, you can move it to any management group where you have the Contributor role.
> [!IMPORTANT]
-> Azure Resource Manager caches management group hierarchy details for up to 30 minutes.
-> As a result, moving a management group may not immediately be reflected in the Azure portal.
+> Azure Resource Manager caches details of the management group hierarchy for up to 30 minutes.
+> As a result, the Azure portal might not immediately show that you moved a management group.
-## Audit management groups using activity logs
+## Audit management groups by using activity logs
-Management groups are supported within
-[Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md). You can search all
+Management groups are supported in [Azure Monitor activity logs](../../azure-monitor/essentials/platform-logs-overview.md). You can query all
events that happen to a management group in the same central location as other Azure resources. For example, you can see all role assignments or policy assignment changes made to a particular management group.
-When looking to query on management groups outside the Azure portal, the target scope for
-management groups looks like **"/providers/Microsoft.Management/managementGroups/{_management-group-id_}"**.
+When you want to query on management groups outside the Azure portal, the target scope for
+management groups looks like `"/providers/Microsoft.Management/managementGroups/{management-group-id}"`.
> [!NOTE]
-> Using the Azure Resource Manager REST API, you can enable diagnostic settings on a management group to send related Azure Activity log entries to a Log Analytics workspace, Azure Storage, or Azure Event Hub. For more information, see [Management Group Diagnostic Settings - Create Or Update](/rest/api/monitor/management-group-diagnostic-settings/create-or-update).
+> By using the Azure Resource Manager REST API, you can enable diagnostic settings on a management group to send related Azure Monitor activity log entries to a Log Analytics workspace, Azure Storage, or Azure Event Hubs. For more information, see [Management group diagnostic settings: Create or update](/rest/api/monitor/management-group-diagnostic-settings/create-or-update).
-## Next steps
+## Related content
To learn more about management groups, see:

- [Create management groups to organize Azure resources](./create-management-group-portal.md)
-- [How to change, delete, or manage your management groups](./manage.md)
-- See options for [How to protect your resource hierarchy](./how-to/protect-resource-hierarchy.md)
+- [Change, delete, or manage your management groups](./manage.md)
+- [Protect your resource hierarchy](./how-to/protect-resource-hierarchy.md)
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
Finally, to identify the AKS cluster version that you're using, follow the linke
### Add-on versions available per each AKS cluster version
+#### 1.7.0
+Introducing expansion, a shift-left feature that lets you know up front whether your workload resources (Deployments, ReplicaSets, Jobs, and so on) will produce admissible pods. Expansion shouldn't change the behavior of your policies; rather, it just shifts Gatekeeper's evaluation of pod-scoped policies to occur at workload admission time rather than pod admission time. However, to perform this evaluation, it must generate and evaluate a what-if pod that's based on the pod spec defined in the workload, which might have incomplete metadata. For instance, the what-if pod won't contain the proper owner references. Because of this small risk of changed policy behavior, expansion is disabled by default.
+
+To enable expansion for a given policy definition, set `.policyRule.then.details.source` to `All`. Built-ins will be updated soon to enable parameterization of this field. If you test your policy definition and find that the what-if pod generated for evaluation is incomplete, you can also use a mutation with source `Generated` to mutate the what-if pods. For more information on this option, see the [Gatekeeper documentation](https://open-policy-agent.github.io/gatekeeper/website/docs/expansion#mutating-example).
+
+Security improvements.
+- Released July 2024
+- Kubernetes 1.27+
+- Gatekeeper 3.16.3
+
+#### 1.6.1
+Security improvements.
+- Released May 2024
+- Gatekeeper 3.14.2
+
+#### 1.5.0
+Security improvements.
+- Released May 2024
+- Kubernetes 1.27+
+- Gatekeeper 3.16.3
#### 1.4.0

Enables mutation and external data by default. The additional mutating webhook and increased validating webhook timeout cap might add latency to calls in the worst case. Also introduces support for viewing policy definition and set definition version in compliance results.
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Title: Benefits of migrating to Azure HDInsight 4.0.
description: Learn the benefits of migrating to Azure HDInsight 4.0.
Previously updated : 07/22/2024
Last updated : 07/23/2024

# Significant version changes in HDInsight 4.0 and advantages
Set synchronization of partitions to occur every 10 minutes expressed in seconds
> [!WARNING]
-> With the `management.task` running every 10 minutes, there will be pressure on the SQL server DTU. This feature also adds cost to Storage access as the partition management threads runs at regular intervals even when cluster is idle.
+> With the `management.task` running every 10 minutes, there will be pressure on the SQL server DTU.
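For reference, a sketch of how the related table-level setting is applied (the JDBC URL and table name are placeholders, and the 10-minute cadence is assumed to map to `metastore.partition.management.task.frequency` = 600 seconds):

```bash
# Sketch only: enable automatic partition discovery for one table through beeline.
# Replace the JDBC URL and mydb.mytable with your cluster endpoint and table.
beeline -u "jdbc:hive2://localhost:10001/;transportMode=http" \
  -e "ALTER TABLE mydb.mytable SET TBLPROPERTIES ('discover.partitions'='true');"
```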
You can verify the output from Microsoft Azure portal.
load-balancer Quickstart Load Balancer Standard Internal Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-bicep.md
Previously updated : 05/01/2023
Last updated : 07/22/2024
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Create a resource group with [az group create](/cli/azure/group#az-group-create)
```azurecli-interactive az group create \ --name CreateIntLBQS-rg \
- --location westus3
+ --location westus2
``` When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
Create a virtual network by using [az network vnet create](/cli/azure/network/vn
```azurecli-interactive az network vnet create \ --resource-group CreateIntLBQS-rg \
- --location westus3 \
+ --location westus2 \
--name myVNet \ --address-prefixes 10.1.0.0/16 \ --subnet-name myBackendSubnet \
az network vnet subnet create \
Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a host. ```azurecli-interactive
+az config set extension.use_dynamic_install=yes_without_prompt
+ az network bastion create \ --resource-group CreateIntLBQS-rg \ --name myBastionHost \ --public-ip-address myBastionIP \ --vnet-name myVNet \
- --location westus3
+ --location westus2
``` It can take a few minutes for the Azure Bastion host to deploy.
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create).
--resource-group CreateIntLBQS-rg \ --name myVM$n \ --nics myNicVM$n \
- --image win2019datacenter \
+ --image win2022datacenter \
--admin-username azureuser \ --zone $n \ --no-wait
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
description: This quickstart shows how to create an internal load balancer using
Previously updated : 05/31/2023
Last updated : 07/23/2024
#Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
-# Quickstart: Create an internal load balancer to load balance VMs using Azure PowerShell
+# Quickstart: Create an internal load balancer to load balance virtual machines using Azure PowerShell
-Get started with Azure Load Balancer by using Azure PowerShell to create an internal load balancer and two virtual machines.Additional resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
+Get started with Azure Load Balancer by creating an internal load balancer and two virtual machines with Azure PowerShell. You also deploy other resources, including Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
:::image type="content" source="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png" alt-text="Diagram of resources deployed for internal load balancer." lightbox="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png":::
An Azure resource group is a logical container into which Azure resources are de
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). ```azurepowershell-interactive
-New-AzResourceGroup -Name 'CreateIntLBQS-rg' -Location 'eastus'
+$rg = @{
+ Name = 'CreateINTLBQS-rg'
+ Location = 'westus2'
+}
+New-AzResourceGroup @rg
```

## Configure virtual network
Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
## Create public IP address for NAT gateway and place IP in variable ## $gwpublicip = @{ Name = 'myNATgatewayIP'
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Sku = 'Standard' AllocationMethod = 'static' Zone = 1,2,3
To create a zonal public IP address in zone 1, use the following command:
## Create a zonal public IP address for NAT gateway and place IP in variable ## $gwpublicip = @{ Name = 'myNATgatewayIP'
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Sku = 'Standard' AllocationMethod = 'static' Zone = 1
$gwpublicip = New-AzPublicIpAddress @gwpublicip
## Create NAT gateway resource ## $nat = @{
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
Name = 'myNATgateway' IdleTimeoutInMinutes = '10' Sku = 'Standard'
- Location = 'eastus'
+ Location = 'westus2'
PublicIpAddress = $gwpublicip } $natGateway = New-AzNatGateway @nat
$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet
## Create the virtual network ## $net = @{ Name = 'myVNet'
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
AddressPrefix = '10.1.0.0/16' Subnet = $subnetConfig,$bastsubnetConfig }
$vnet = New-AzVirtualNetwork @net
## Create public IP address for bastion host. ## $bastionip = @{ Name = 'myBastionIP'
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Sku = 'Standard' AllocationMethod = 'Static' }
$bastionip = New-AzPublicIpAddress @bastionip
## Create bastion host ## $bastion = @{
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
Name = 'myBastion' PublicIpAddress = $bastionip VirtualNetwork = $vnet
$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule
## Create network security group ## $nsg = @{ Name = 'myNSG'
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
SecurityRules = $rule1 } New-AzNetworkSecurityGroup @nsg
This section details how you can create and configure the following components o
## Place virtual network created in previous step into a variable. ## $net = @{ Name = 'myVNet'
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
} $vnet = Get-AzVirtualNetwork @net
$rule = New-AzLoadBalancerRuleConfig @lbrule -EnableTcpReset
## Create the load balancer resource. ## $loadbalancer = @{
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
Name = 'myLoadBalancer'
- Location = 'eastus'
+ Location = 'westus2'
Sku = 'Standard' FrontendIpConfiguration = $feip BackendAddressPool = $bePool
$cred = Get-Credential
## Place virtual network created in previous step into a variable. ## $net = @{ Name = 'myVNet'
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
} $vnet = Get-AzVirtualNetwork @net ## Place the load balancer into a variable. ## $lb = @{ Name = 'myLoadBalancer'
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
} $bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig ## Place the network security group into a variable. ##
-$sg = {
+$sg = @{
Name = 'myNSG'
- ResourceGroupName = 'CreateIntLBQS-rg' @sg
+ ResourceGroupName = $rg.name
}
-$nsg = Get-AzNetworkSecurityGroup
+$nsg = Get-AzNetworkSecurityGroup @sg
## For loop with variable to create virtual machines for load balancer backend pool. ## for ($i=1; $i -le 2; $i++)
for ($i=1; $i -le 2; $i++)
## Command to create network interface for VMs ## $nic = @{ Name = "myNicVM$i"
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Subnet = $vnet.Subnets[0] NetworkSecurityGroup = $nsg LoadBalancerBackendAddressPool = $bepool
for ($i=1; $i -le 2; $i++)
## Create the virtual machine for VMs ## $vm = @{
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
VM = $vmConfig Zone = "$i" }
for ($i=1; $i -le 2; $i++)
Publisher = 'Microsoft.Compute' ExtensionType = 'CustomScriptExtension' ExtensionName = 'IIS'
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
VMName = "myVM$i"
- Location = 'eastus'
+ Location = 'westus2'
TypeHandlerVersion = '1.8' SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' }
$cred = Get-Credential
## Place the virtual network into a variable. ## $net = @{ Name = 'myVNet'
- ResourceGroupName = 'CreateIntLBQS-rg'
+ ResourceGroupName = $rg.name
} $vnet = Get-AzVirtualNetwork @net ## Place the network security group into a variable. ##
-$sg = {
+$sg = @{
Name = 'myNSG'
- ResourceGroupName = 'CreateIntLBQS-rg' @sg
+ ResourceGroupName = $rg.name
}
-$nsg = Get-AzNetworkSecurityGroup
+$nsg = Get-AzNetworkSecurityGroup @sg
## Command to create network interface for VM ## $nic = @{ Name = "myNicTestVM"
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Subnet = $vnet.Subnets[0] NetworkSecurityGroup = $nsg }
$vmConfig = New-AzVMConfig @vmsz `
## Create the virtual machine for VMs ## $vm = @{
- ResourceGroupName = 'CreateIntLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
VM = $vmConfig } New-AzVM @vm
To see the load balancer distribute traffic across all three VMs, you can force-
When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, load balancer, and the remaining resources. ```azurepowershell-interactive
-Remove-AzResourceGroup -Name 'CreateIntLBQS-rg'
+Remove-AzResourceGroup -Name $rg.name
```

## Next steps
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
description: This quickstart shows how to create a load balancer using Azure PowerShell.
Previously updated : 05/01/2023
Last updated : 07/23/2024
# Quickstart: Create a public load balancer to load balance VMs using Azure PowerShell
-Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and two virtual machines. Additional resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
+Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and two virtual machines. You also deploy other resources, including Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
:::image type="content" source="media/quickstart-load-balancer-standard-public-portal/public-load-balancer-resources.png" alt-text="Diagram of resources deployed for a standard public load balancer." lightbox="media/quickstart-load-balancer-standard-public-portal/public-load-balancer-resources.png"::: ## Prerequisites
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resourc
```azurepowershell-interactive $rg = @{ Name = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ Location = 'westus2'
} New-AzResourceGroup @rg ```
Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
```azurepowershell-interactive $publicip = @{ Name = 'myPublicIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Sku = 'Standard' AllocationMethod = 'static' Zone = 1,2,3
To create a zonal public IP address in zone 1, use the following command:
```azurepowershell-interactive $publicip = @{ Name = 'myPublicIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Sku = 'Standard' AllocationMethod = 'static' Zone = 1
This section details how you can create and configure the following components o
## Place public IP created in previous steps into variable. ## $pip = @{ Name = 'myPublicIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
} $publicIp = Get-AzPublicIpAddress @pip
$rule = New-AzLoadBalancerRuleConfig @lbrule -EnableTcpReset -DisableOutboundSNA
## Create the load balancer resource. ## $loadbalancer = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
Name = 'myLoadBalancer'
- Location = 'eastus'
+ Location = 'westus2'
Sku = 'Standard' FrontendIpConfiguration = $feip BackendAddressPool = $bePool
Use a NAT gateway to provide outbound internet access to resources in the backen
## Create public IP address for NAT gateway ## $ip = @{ Name = 'myNATgatewayIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Sku = 'Standard' AllocationMethod = 'Static' }
$publicIP = New-AzPublicIpAddress @ip
## Create NAT gateway resource ## $nat = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
Name = 'myNATgateway' IdleTimeoutInMinutes = '10' Sku = 'Standard'
- Location = 'eastus'
+ Location = 'westus2'
PublicIpAddress = $publicIP } $natGateway = New-AzNatGateway @nat
$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet
## Create the virtual network ## $net = @{ Name = 'myVNet'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
AddressPrefix = '10.1.0.0/16' Subnet = $subnetConfig,$bastsubnetConfig }
$vnet = New-AzVirtualNetwork @net
## Create public IP address for bastion host. ## $ip = @{ Name = 'myBastionIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Sku = 'Standard' AllocationMethod = 'Static' }
$publicip = New-AzPublicIpAddress @ip
## Create bastion host ## $bastion = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
Name = 'myBastion' PublicIpAddress = $publicip VirtualNetwork = $vnet
$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule
## Create network security group ## $nsg = @{ Name = 'myNSG'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
SecurityRules = $rule1 } New-AzNetworkSecurityGroup @nsg
$cred = Get-Credential
## Place the virtual network into a variable. ## $net = @{ Name = 'myVNet'
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
} $vnet = Get-AzVirtualNetwork @net ## Place the load balancer into a variable. ## $lb = @{ Name = 'myLoadBalancer'
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
} $bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig ## Place the network security group into a variable. ## $ns = @{ Name = 'myNSG'
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
} $nsg = Get-AzNetworkSecurityGroup @ns
for ($i=1; $i -le 2; $i++){
## Command to create network interface for VMs ## $nic = @{ Name = "myNicVM$i"
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
Subnet = $vnet.Subnets[0] NetworkSecurityGroup = $nsg LoadBalancerBackendAddressPool = $bepool
for ($i=1; $i -le 2; $i++){
## Create the virtual machine for VMs ## $vm = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ ResourceGroupName = $rg.name
+ Location = 'westus2'
VM = $vmConfig Zone = "$i" }
$ext = @{
Publisher = 'Microsoft.Compute' ExtensionType = 'CustomScriptExtension' ExtensionName = 'IIS'
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
VMName = "myVM$i"
- Location = 'eastus'
+ Location = 'westus2'
TypeHandlerVersion = '1.8' SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' }
Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress)
```azurepowershell-interactive $ip = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
+ ResourceGroupName = $rg.name
Name = 'myPublicIP' } Get-AzPublicIPAddress @ip | select IpAddress
Copy the public IP address, and then paste it into the address bar of your brows
When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, load balancer, and the remaining resources. ```azurepowershell-interactive
-Remove-AzResourceGroup -Name 'CreatePubLBQS-rg'
+Remove-AzResourceGroup -Name $rg.name
```

## Next steps
load-balancer Tutorial Cross Region Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-cli.md
Previously updated : 06/27/2023
Last updated : 07/23/2024
#Customer intent: As an administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
az login
``````

## Create cross-region load balancer
-In this section, you'll create a cross-region load balancer, public IP address, and load balancing rule.
+In this section, you create a cross-region load balancer, public IP address, and load balancing rule.
### Create a resource group
Create a load balancer rule with [az network cross-region-lb rule create](/cli/a
## Create backend pool
-In this section, you'll add two regional standard load balancers to the backend pool of the cross-region load balancer.
+In this section, you add two regional standard load balancers to the backend pool of the cross-region load balancer.
> [!IMPORTANT]
> To complete these steps, ensure that two regional load balancers with backend pools have been deployed in your subscription. For more information, see **[Quickstart: Create a public load balancer to load balance VMs using Azure CLI](quickstart-load-balancer-standard-public-cli.md)**.

### Add the regional frontends to load balancer
-In this section, you'll place the resource IDs of two regional load balancers frontends into variables. You'll then use the variables to add the frontends to the backend address pool of the cross-region load balancer.
+In this section, you place the resource IDs of the two regional load balancers' frontends into variables, and then use the variables to add the frontends to the backend address pool of the cross-region load balancer.
Retrieve the resource IDs with [az network lb frontend-ip show](/cli/azure/network/lb/frontend-ip#az-network-lb-frontend-ip-show).
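As a sketch, each ID can be captured in a shell variable like this (the resource group, load balancer, and frontend names are placeholders; repeat the call with the second region's names):

```azurecli-interactive
# Sketch only: capture the frontend IP configuration ID of one regional load balancer.
feid_r1=$(az network lb frontend-ip show \
    --resource-group <regional-rg-1> \
    --lb-name <regional-lb-1> \
    --name <frontend-name> \
    --query id \
    --output tsv)
```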
Use [az network cross-region-lb address-pool address add](/cli/azure/network/cro
## Test the load balancer
-In this section, you'll test the cross-region load balancer. You'll connect to the public IP address in a web browser. You'll stop the virtual machines in one of the regional load balancer backend pools and observe the failover.
+In this section, you test the cross-region load balancer. You connect to the public IP address in a web browser. You stop the virtual machines in one of the regional load balancer backend pools and observe the failover.
1. To get the public IP address of the load balancer, use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show):
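   A minimal sketch of that lookup, with placeholder resource names:

```azurecli-interactive
# Sketch only: print just the frontend IP address of the cross-region load balancer.
az network public-ip show \
    --resource-group <resource-group> \
    --name <public-ip-name> \
    --query ipAddress \
    --output tsv
```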
load-balancer Tutorial Cross Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-powershell.md
Previously updated : 06/27/2023
Last updated : 07/23/2023
#Customer intent: As an administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
If you don't have an Azure subscription, create a [free account](https://azure
- An Azure subscription.
- Two **standard** sku Azure Load Balancers with backend pools deployed in two different Azure regions.
    - For information on creating a regional standard load balancer and virtual machines for backend pools, see [Quickstart: Create a public load balancer to load balance VMs using Azure PowerShell](quickstart-load-balancer-standard-public-powershell.md).
- - Append the name of the load balancers and virtual machines in each region with a **-R1** and **-R2**.
- Azure PowerShell installed locally or Azure Cloud Shell.
New-AzResourceGroup @rg
### Create cross-region load balancer resources
-In this section, you'll create the resources needed for the cross-region load balancer.
+In this section, you create the resources needed for the cross-region load balancer.
A global standard sku public IP is used for the frontend of the cross-region load balancer.
$lb = New-AzLoadBalancer @lbp`
## Configure backend pool
-In this section, you'll add two regional standard load balancers to the backend pool of the cross-region load balancer.
+In this section, you add two regional standard load balancers to the backend pool of the cross-region load balancer.
> [!IMPORTANT]
> To complete these steps, ensure that two regional load balancers with backend pools have been deployed in your subscription. For more information, see **[Quickstart: Create a public load balancer to load balance VMs using Azure PowerShell](quickstart-load-balancer-standard-public-powershell.md)**.
Set-AzLoadBalancerBackendAddressPool @bepoolcr
## Test the load balancer
-In this section, you'll test the cross-region load balancer. You'll connect to the public IP address in a web browser. You'll stop the virtual machines in one of the regional load balancer backend pools and observe the failover.
+In this section, you test the cross-region load balancer. You connect to the public IP address in a web browser. You stop the virtual machines in one of the regional load balancer backend pools and observe the failover.
1. Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to get the public IP address of the load balancer:
load-balancer Tutorial Deploy Cross Region Load Balancer Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-deploy-cross-region-load-balancer-template.md
Previously updated : 04/12/2023
Last updated : 07/22/2024
#Customer intent: As an administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
In this tutorial, you learn how to:
## Prerequisites

-- An Azure account with an active subscription. [Create an account for free]
-(https://azure.microsoft.com/free/?WT.mc_id=A261C142F) and access to the Azure portal.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) and access to the Azure portal.
## Review the template

In this section, you review the template and the parameters that are used to deploy the cross-region load balancer.
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
To opt in for this preview, set the `enableServiceSideCMKEncryption` on a REST A
:::image type="content" source="./media/concept-customer-managed-keys/cmk-service-side-encryption.png" alt-text="Screenshot of the encryption tab with the option for server side encryption selected." lightbox="./media/concept-customer-managed-keys/cmk-service-side-encryption.png":::

> [!NOTE]
-> During this preview key rotation and data labeling capabilities are not supported.
+> During this preview, key rotation and data labeling capabilities aren't supported. Server-side encryption isn't currently supported when the Azure Key Vault that stores your encryption key has public network access disabled.
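The opt-in itself is described only as a REST API property. The sketch below assumes `enableServiceSideCMKEncryption` sits under the workspace `properties` in the ARM payload; every identifier and the api-version are placeholders rather than confirmed values.

```azurecli-interactive
# Sketch only: the property name comes from the article; its placement and the
# api-version are assumptions to verify against the current workspace REST reference.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>?api-version=<api-version>" \
  --body '{
    "location": "<region>",
    "identity": { "type": "SystemAssigned" },
    "properties": { "enableServiceSideCMKEncryption": true }
  }'
```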
For a template that creates a workspace with service-side encryption of metadata, see [https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-cmk-service-side-encryption](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-cmk-service-side-encryption).
machine-learning How To Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-llama.md
Title: How to deploy Meta Llama models with Azure Machine Learning studio
+ Title: How to deploy Meta Llama 3.1 models with Azure Machine Learning studio
-description: Learn how to deploy Meta Llama models with Azure Machine Learning studio.
+description: Learn how to deploy Meta Llama 3.1 models with Azure Machine Learning studio.
Previously updated : 04/16/2024
Last updated : 07/23/2024
reviewer: shubhirajMsft
-# How to deploy Meta Llama models with Azure Machine Learning studio
+# How to deploy Meta Llama 3.1 models with Azure Machine Learning studio
-In this article, you learn about the Meta Llama models (LLMs). You also learn how to use Azure Machine Learning studio to deploy models from this set either to serverless APIs with pay-as you go billing or to managed compute.
+In this article, you learn about the Meta Llama family of models (LLMs). You also learn how to use Azure Machine Learning studio to deploy models from this family either to serverless APIs with pay-as-you-go billing or to managed compute.
> [!IMPORTANT]
-> Read more about the announcement of Meta Llama 3 models available now on Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/Llama3Announcement) and from [Meta Announcement Blog](https://aka.ms/meta-llama3-announcement-blog).
+> Read more about the announcement of Meta Llama 3.1 405B Instruct and other Llama 3.1 models available now on Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/meta-llama-3.1-release-on-azure) and from [Meta Announcement Blog](https://aka.ms/meta-llama-3.1-release-announcement).
-Meta Llama 3 models and tools are a collection of pretrained and fine-tuned generative text models ranging in scale from 8 billion to 70 billion parameters. The Meta Llama model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Meta-Llama-3-8B-Instruct and Meta-Llama-3-70B-Instruct. See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama3-langchain-sample), [LiteLLM](https://aka.ms/meta-llama3-litellm-sample), [OpenAI](https://aka.ms/meta-llama3-openai-sample) and the [Azure API](https://aka.ms/meta-llama3-azure-api-sample).
+Now available on Azure Machine Learning studio Models-as-a-Service:
+- `Meta-Llama-3.1-405B-Instruct`
+- `Meta-Llama-3.1-70B-Instruct`
+- `Meta-Llama-3.1-8B-Instruct`
+
+The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). All models support a long context length (128k) and are optimized for inference with support for grouped-query attention (GQA). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.
+
+See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama-3.1-405B-instruct-langchain), [LiteLLM](https://aka.ms/meta-llama-3.1-405B-instruct-litellm), [OpenAI](https://aka.ms/meta-llama-3.1-405B-instruct-openai) and the [Azure API](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests).
-## Deploy Meta Llama models as a serverless API
-Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing, providing a way to consume them as an API without hosting them on your subscription while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
+## Deploy Meta Llama 3.1 405B Instruct as a serverless API
-Meta Llama models are deployed as a serverless API with pay-as-you-go billing are offered by Meta AI through Microsoft Azure Marketplace, and they might add more terms of use and pricing.
+Meta Llama 3.1 models - like `Meta Llama 3.1 405B Instruct` - can be deployed as a serverless API with pay-as-you-go billing, providing a way to consume them as an API without hosting them on your subscription while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription. Meta Llama 3.1 models are offered through Microsoft Azure Marketplace, which might add its own terms of use and pricing.
### Azure Marketplace model offerings
-The following models are available in Azure Marketplace for Meta Llama models when deployed as a serverless API with pay-as-you-go billing:
+The following models are available in Azure Marketplace for Llama 3.1 and Llama 3 when deployed as a service with pay-as-you-go billing:
-# [Meta Llama 3](#tab/llama-three)
+# [Meta Llama 3.1](#tab/llama-three)
-* [Meta Llama-3-8B (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-base)
-* [Meta Llama-3-70B (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-base)
+* [Meta-Llama-3.1-405B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-405B-base)
+* [Meta-Llama-3.1-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8B-refresh)
+* [Meta Llama-3.1-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70B-refresh)
+* [Meta-Llama-3-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-chat)
+* [Meta-Llama-3-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-chat)
If you need to deploy a different model, [deploy it to managed compute](#deploy-meta-llama-models-to-managed-compute) instead.
If you need to deploy a different model, [deploy it to managed compute](#deploy-
# [Meta Llama 3](#tab/llama-three)

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Meta Llama 3 is only available with workspaces created in these regions:
+- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Meta Llama 3.1 and Llama 3 is only available with hubs created in these regions:
* East US
* East US 2
To create a deployment:
# [Meta Llama 3](#tab/llama-three)

1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
-1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region.
-1. Choose the model you want to deploy from the [model catalog](https://ml.azure.com/model/catalog).
+1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to one of the available regions listed in the prerequisites of this article.
+1. Choose `Meta-Llama-3.1-405B-Instruct` to deploy from the [model catalog](https://ml.azure.com/model/catalog).
Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
-1. On the model's overview page, select **Deploy** and then **Serverless API with Azure AI Content Safety**.
+1. On the **Details** page for `Meta-Llama-3.1-405B-Instruct`, select **Deploy** and then select **Serverless API with Azure AI Content Safety**.
1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
-1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Meta-Llama-3-70B) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
+1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, `Meta-Llama-3.1-405B-Instruct`) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
> [!NOTE]
> Subscribing a workspace to a particular Azure Marketplace offering (in this case, Llama-3-70B) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
To create a deployment:
-To learn about billing for Meta Llama models deployed as a serverless API, see [Cost and quota considerations for Meta Llama models deployed as a serverless API](#cost-and-quota-considerations-for-meta-llama-models-deployed-as-a-serverless-api).
+To learn about billing for Meta Llama models deployed as a serverless API, see [Cost and quota considerations for Meta Llama models deployed as a serverless API](#cost-and-quota-considerations-for-meta-llama-31-models-deployed-as-a-serverless-api).
### Consume Meta Llama models as a service
Models deployed as a service can be consumed using either the chat or the comple
# [Meta Llama 3](#tab/llama-three)

1. In the **workspace**, select **Endpoints** > **Serverless endpoints**.
-1. Find and select the deployment you created.
+1. Find and select the `Meta-Llama-3.1-405B-Instruct` deployment you created.
1. Copy the **Target** URL and the **Key** token values.
1. Make an API request based on the type of model you deployed.

    - For completions models, such as `Llama-3-8B`, use the [`<target_url>/v1/completions`](#completions-api) API.
- - For chat models, such as `Llama-3-8B-Instruct`, use the [`<target_url>/v1/chat/completions`](#chat-api) API.
+ - For chat models, such as `Meta-Llama-3.1-405B-Instruct`, use the [`/chat/completions`](#chat-api) API.
- For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-a-serverless-api) section.
+ For more information on using the APIs, see the [reference](#reference-for-meta-llama-31-models-deployed-as-a-serverless-api) section. For an example call, see the sketch that follows.
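The sketch below shows one such call with `curl`; the target URL and key are the values copied in the earlier step, while the bearer-style authorization header and the body fields are assumptions based on common usage rather than the article's own sample.

```bash
# Sketch only: <target-url> and <key> come from the serverless endpoint's details page.
curl -X POST "<target-url>/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <key>" \
  -d '{
        "messages": [
          { "role": "user", "content": "List three capabilities of Meta Llama 3.1." }
        ],
        "max_tokens": 256
      }'
```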
# [Meta Llama 2](#tab/llama-two)
Models deployed as a service can be consumed using either the chat or the comple
    - For completions models, such as `Meta-Llama-2-7B`, use the [`/v1/completions`](#completions-api) API or the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/completions`.
    - For chat models, such as `Meta-Llama-2-7B-Chat`, use the [`/v1/chat/completions`](#chat-api) API or the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/chat/completions`.
- For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-a-serverless-api) section.
+ For more information on using the APIs, see the [reference](#reference-for-meta-llama-31-models-deployed-as-a-serverless-api) section.
-### Reference for Meta Llama models deployed a serverless API
+### Reference for Meta Llama 3.1 models deployed as a serverless API
Llama models accept either the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/chat/completions` or a [Llama Chat API](#chat-api) on `/v1/chat/completions`. In the same way, text completions can be generated by using the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/completions` or a [Llama Completions API](#completions-api) on `/v1/completions`.
The following is an example response:
## Deploy Meta Llama models to managed compute
-Apart from deploying with the pay-as-you-go managed service, you can also deploy Llama 3 models to managed compute in Azure Machine Learning studio. When deployed to managed compute, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to managed compute consume quota from your subscription. All the models in the Meta Llama family can be deployed to managed compute.
+Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama 3.1 models to managed compute in Azure Machine Learning studio. When deployed to managed compute, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to managed compute consume quota from your subscription. The following models from the 3.1 release wave are available on managed compute:
+- `Meta-Llama-3.1-8B-Instruct` (fine-tuning supported)
+- `Meta-Llama-3.1-70B-Instruct` (fine-tuning supported)
+- `Meta-Llama-3.1-8B` (fine-tuning supported)
+- `Meta-Llama-3.1-70B` (fine-tuning supported)
+- `Llama Guard 3 8B`
+- `Prompt Guard`
### Create a new deployment

# [Meta Llama 3](#tab/llama-three)
-Follow these steps to deploy a model such as `Llama-3-7B-Instruct` to a real-time endpoint in [Azure Machine Learning studio](https://ml.azure.com).
+Follow these steps to deploy a model such as `Meta-Llama-3.1-70B-Instruct` to a managed compute in [Azure Machine Learning studio](https://ml.azure.com).
1. Select the workspace in which you want to deploy the model.
1. Choose the model that you want to deploy from the studio's [model catalog](https://ml.azure.com/model/catalog).
- Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **real-time endpoints** > **Create**.
+ Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Managed Compute** > **Create**.
1. On the model's overview page, select **Deploy** and then **Managed Compute without Azure AI Content Safety**.
For more information on how to deploy models to managed compute using the studio
# [Meta Llama 2](#tab/llama-two)
-Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure Machine Learning studio](https://ml.azure.com).
+Follow these steps to deploy a model such as `Llama-2-7b-chat` to a managed compute in [Azure Machine Learning studio](https://ml.azure.com).
1. Select the workspace in which you want to deploy the model.
1. Choose the model that you want to deploy from the studio's [model catalog](https://ml.azure.com/model/catalog).
- Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **real-time endpoints** > **Create**.
+ Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **managed compute** > **Create**.
1. On the model's overview page, select **Deploy** and then **Managed Compute without Azure AI Content Safety**.
For more information on how to deploy models to managed compute using the studio
### Consume Meta Llama models deployed to managed compute
-For reference about how to invoke Meta Llama 3 models deployed to real-time endpoints, see the model's card in Azure Machine Learning studio [model catalog](concept-model-catalog.md). Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation.
+For reference about how to invoke Meta Llama 3 models deployed to managed compute, see the model's card in Azure Machine Learning studio [model catalog](concept-model-catalog.md). Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation.
#### Additional inference examples
-# [Meta Llama 3](#tab/llama-three)
-
-| **Package** | **Sample Notebook** |
-|-|-|
-| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/openaisdk.ipynb) |
-| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/langchain.ipynb) |
-| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/webrequests.ipynb) |
-| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/litellm.ipynb) |
-
-# [Meta Llama 2](#tab/llama-two)
| **Package** | **Sample Notebook** |
|-|-|
-| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/openaisdk.ipynb) |
-| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/langchain.ipynb) |
-| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/webrequests.ipynb) |
-| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/litellm.ipynb) |
--
+| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-openai)|
+| LangChain | [langchain.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-langchain)|
+| LiteLLM SDK | [litellm.ipynb](https://aka.ms/meta-llama-3.1-405B-instruct-litellm) |
## Cost and quotas
-### Cost and quota considerations for Meta Llama models deployed as a serverless API
+### Cost and quota considerations for Meta Llama 3.1 models deployed as a serverless API
-Meta Llama models deployed as a serverless API are offered by Meta through Azure Marketplace and integrated with Azure Machine Learning studio for use. You can find Azure Marketplace pricing when deploying or fine-tuning models.
+Meta Llama 3.1 models deployed as a serverless API are offered by Meta through Azure Marketplace and integrated with Azure Machine Learning studio for use. You can find Azure Marketplace pricing when deploying or fine-tuning models.
Each time a workspace subscribes to a given model offering from Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
For more information on how to track costs, see [Monitor costs for models offere
:::image type="content" source="media/how-to-deploy-models-llama/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offerings and their associated meters." lightbox="media/how-to-deploy-models-llama/costs-model-as-service-cost-details.png":::
-Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+Quota is managed per deployment. Each deployment has a rate limit of 400,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-### Cost and quota considerations for Meta Llama models deployed managed compute
+### Cost and quota considerations for Meta Llama 3.1 models deployed to managed compute
-For deployment and inferencing of Meta Llama models with managed compute, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure Machine Learning studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
+For deployment and inferencing of Meta Llama 3.1 models with managed compute, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
## Content filtering
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
When you disable the admin user for ACR, Azure Machine Learning uses a managed i
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ```azurecli-interactive
- az ml workspace show -w <my workspace> \
- -g <my resource group>
- --query containerRegistry
+ az ml workspace show --name <my workspace name> \
+ --resource-group <my resource group> \
+ --subscription <my subscription id> \
+ --query container_registry
```
This command returns a value similar to the following text. You only want the last portion of the text, which is the ACR instance name:
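Because only the last segment of the returned resource ID is the registry name, a small sketch like this can extract it directly (the variable name and the shell expansion are illustrative additions):

```azurecli-interactive
# Sketch only: store the ACR resource ID, then keep the final path segment (the name).
acr_id=$(az ml workspace show --name <my workspace name> \
    --resource-group <my resource group> \
    --query container_registry \
    --output tsv)
echo "${acr_id##*/}"
```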
If ACR admin user is disallowed by subscription policy, you should first create
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ```azurecli-interactive
-az ml workspace create -w <workspace name> \
+az ml workspace create -n <workspace name> \
-g <workspace resource group> \ -l <region> \ --container-registry /subscriptions/<subscription id>/resourceGroups/<acr resource group>/providers/Microsoft.ContainerRegistry/registries/<acr name>
Create machine learning compute cluster with system-assigned managed identity en
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ```azurecli-interactive
-az ml compute show --name <cluster name> -w <workspace> -g <resource group>
+az ml compute show --name <cluster name> --workspace-name <workspace> -g <resource group>
``` Optionally, you can update the compute cluster to assign a user-assigned managed identity:
In this scenario, Azure Machine Learning service builds the training or inferenc
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ```azurecli-interactive
- az ml workspace show -w <workspace name> -g <resource group> --query identityPrincipalId
+ az ml workspace show -n <workspace name> -g <resource group> --query identityPrincipalId
``` 1. Grant the Managed Identity Operator role:
In this scenario, Azure Machine Learning service builds the training or inferenc
The following command demonstrates how to use the YAML file to create a connection with your workspace. Replace `<yaml file>`, `<workspace name>`, and `<resource group>` with the values for your configuration: ```azurecli-interactive
- az ml connection create --file <yml file> --resource-group <resource group> --workspace-name <workspace>
+ az ml connection create --file <yml file> --resource-group <resource group> --workspace-name <workspace>
``` 1. Once the configuration is complete, you can use the base images from private ACR when building environments for training or inference. The following code snippet demonstrates how to specify the base image ACR and image name in an environment definition:
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the foll
|[azureml.pipeline.core.Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline?view=azure-ml-py&preserve-view=true)|[azure.ai.ml.dsl.pipeline](/python/api/azure-ai-ml/azure.ai.ml.dsl#azure-ai-ml-dsl-pipeline)| |[OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py&preserve-view=true)|[Output](/python/api/azure-ai-ml/azure.ai.ml.output)| |[dataset as_mount](/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py#azureml-data-filedataset-as-mount&preserve-view=true)|[Input](/python/api/azure-ai-ml/azure.ai.ml.input)|
+|[StepSequence](/python/api/azureml-pipeline-core/azureml.pipeline.core.stepsequence)|[Data dependency](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/3b_pipeline_with_data)|
## Step and job/component type mapping
machine-learning How To Deploy For Real Time Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md
In this article, you'll learn how to deploy a flow as a managed online endpoint
- Have a basic understanding of managed identities. [Learn more about managed identities.](../../active-directory/managed-identities-azure-resources/overview.md)

> [!NOTE]
-> Managed online endpoint only supports managed virtual network. If your workspace is in custom vnet, you need to try other deployment options, such as deploy to [Kubernetes online endpoint using CLI/SDK](./how-to-deploy-to-code.md), or [deploy to other platforms suchs Docker](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/https://docsupdatetracker.net/index.html).
+> Managed online endpoint only supports managed virtual network. If your workspace is in custom vnet, you need to try other deployment options, such as deploy to [Kubernetes online endpoint using CLI/SDK](./how-to-deploy-to-code.md), or [deploy to other platforms such as Docker](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/https://docsupdatetracker.net/index.html).
## Build the flow and get it ready for deployment
machine-learning How To Deploy To Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md
Before beginning make sure that you have tested your flow properly, and feel con
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing "Microsoft.MachineLearningServices/workspaces/onlineEndpoints/". If you use studio to create or manage online endpoints and deployments, you need the additional permission "Microsoft.Resources/deployments/write" from the resource group owner. For more information, see [Manage access to an Azure Machine Learning workspace](../how-to-assign-roles.md).

> [!NOTE]
-> Managed online endpoint only supports managed virtual network. If your workspace is in custom vnet, you can deploy to Kubernetes online endpoint, or [deploy to other platforms suchs Docker](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/https://docsupdatetracker.net/index.html).
+> Managed online endpoint only supports managed virtual network. If your workspace is in custom vnet, you can deploy to Kubernetes online endpoint, or [deploy to other platforms such as Docker](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/https://docsupdatetracker.net/index.html).
### Virtual machine quota allocation for deployment
machine-learning How To Evaluate Semantic Kernel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-evaluate-semantic-kernel.md
This will present you with a detailed table, line-by-line comparison of the resu
> Follow along with our documentation to get started!
> And keep an eye out for more integrations.
-If youΓÇÖre interested in learning more about how you can use prompt flow to test and evaluate Semantic Kernel, we recommend following along to the articles we created. At each step, we provide sample code and explanations so you can use prompt flow successfully with Semantic Kernel.
+If you're interested in learning more about how you can use Planners in Semantic Kernel, we recommend that you read the following article:
-* [Using prompt flow with Semantic Kernel](/semantic-kernel/ai-orchestration/planners/evaluate-and-deploy-planners/)
-* [Create a prompt flow with Semantic Kernel](/semantic-kernel/ai-orchestration/planners/evaluate-and-deploy-planners/create-a-prompt-flow-with-semantic-kernel)
-* [Running batches with prompt flow](/semantic-kernel/ai-orchestration/planners/evaluate-and-deploy-planners/running-batches-with-prompt-flow)
-* [Evaluate your plugins and planners](/semantic-kernel/ai-orchestration/planners/evaluate-and-deploy-planners/)
+* [Learn more about planners](/semantic-kernel/ai-orchestration/planners/evaluate-and-deploy-planners/)
When your planner is fully prepared, it can be deployed as an online endpoint in Azure Machine Learning. This allows it to be easily integrated into your application for consumption. Learn more about how to [deploy a flow as a managed online endpoint for real-time inference](./how-to-deploy-for-real-time-inference.md).
migrate Deploy Appliance Script Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script-government.md
Check that the zipped file is secure, before you deploy it.
- Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value: -
+| **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
### Run the script
Check that the zipped file is secure, before you deploy it.
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]``` - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:-
- **Download** | **Hash value**
- |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+
+ | **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
### Run the script
Check that the zipped file is secure, before you deploy it.
- Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:
+ | **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
> [!NOTE]
> The same script can be used to set up Physical appliance for Azure Government cloud with either public or private endpoint connectivity.
migrate Deploy Appliance Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script.md
Check that the zipped file is secure, before you deploy it.
- Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:
+ | **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
> [!NOTE]
> The same script can be used to set up VMware appliance for either Azure public or Azure Government cloud.
Check that the zipped file is secure, before you deploy it.
- Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:
+ | **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
> [!NOTE] > The same script can be used to set up a Hyper-V appliance for either the Azure public or Azure Government cloud.
migrate Discover And Assess Using Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md
Check that the zipped file is secure before you deploy it.
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]``` - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:-
+
+ | **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
> [!NOTE] > The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios (such as VMware, Hyper-V, physical, or other) to deploy an appliance with the desired configuration.
migrate How To Scale Out For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md
In **Download Azure Migrate appliance**, click **Download**. You need to downlo
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]``` - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` > 3. Download the [latest version](https://go.microsoft.com/fwlink/?linkid=2191847) of the scale-out appliance installer from the portal if the computed hash value doesn't match this string:
-07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
### 3. Run the Azure Migrate installer script
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-physical.md
Check that the zipped file is secure before you deploy it.
- Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:
+ | **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
> [!NOTE] > The same script can be used to set up a physical appliance for either the Azure public or Azure Government cloud.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
Check that the zipped file is secure before you deploy it.
**Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)]
- For Azure Government: **Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)]
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
Check that the zipped file is secure before you deploy it.
**Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)]
- For Azure Government: **Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)]
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
Hash value is:
**Hash** | **Value** |
-SHA256 | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+SHA256 | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)]
### Create an account to access servers
Check that the zipped file is secure before you deploy it.
**Scenario*** | **Download** | **SHA256** | |
- Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+ Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)]
### 3. Create an appliance
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
Check that the zipped file is secure before you deploy it.
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]``` - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` 3. Verify the latest appliance version and hash value:-
+
+ | **Download** | **Hash value** |
+ | | |
+ | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](includes/security-hash-value.md)] |
> [!NOTE] > The same script can be used to set up a physical appliance for either the Azure public or Azure Government cloud with public or private endpoint connectivity.
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/how-to-set-up-appliance-vmware.md
Before you deploy the OVA file, verify that the file is secure:
1. Verify the latest appliance versions and hash values: - For the Azure public cloud:-
- **Algorithm** | **Download** | **SHA256**
- | |
- VMware (11.9 GB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191954) | 06256F9C6FB3F011152D861DA43FFA1C5C8FF966931D5CE00F1F252D3A2F4723
+
+ [!INCLUDE [public-cloud-vmware.md](../includes/public-cloud-vmware.md)]
- For Azure Government: **Algorithm** | **Download** | **SHA256** | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+ VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](../includes/security-hash-value.md)]
#### Create the appliance server
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-discover-vmware.md
Before you deploy the OVA file, verify that the file is secure:
- For the Azure public cloud:
- **Algorithm** | **Download** | **SHA256**
- | |
- VMware (11.9 GB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191954) | 06256f9c6fb3f011152d861da43ffa1c5c8ff966931d5ce00f1f252d3a2f4723
+ [!INCLUDE [public-cloud-vmware.md](../includes/public-cloud-vmware.md)]
- For Azure Government: **Algorithm** | **Download** | **SHA256** | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 07783A31D1E66BE963349B5553DC1F1E94C70AA149E11AC7D8914F4076480731
+ VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | [!INCLUDE [security-hash-value.md](../includes/security-hash-value.md)]
#### Create the appliance server
network-watcher Vnet Flow Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-portal.md
Previously updated : 04/24/2024 Last updated : 07/23/2024 #CustomerIntent: As an Azure administrator, I want to log my virtual network IP traffic using Network Watcher VNet flow logs so that I can analyze it later.
To enable traffic analytics for a flow log, follow these steps:
| Setting | Value |
| - | -- |
- | Traffic analytics processing interval | Select the processing interval that you prefer, available options are: **Every 1 hour** and **Every 10 mins**. The default processing interval is every one hour. For more information, see [Traffic analytics](traffic-analytics.md). |
| Subscription | Select the Azure subscription of your Log Analytics workspace. |
| Log Analytics Workspace | Select your Log Analytics workspace. By default, the Azure portal creates the ***DefaultWorkspace-{SubscriptionID}-{Region}*** Log Analytics workspace in the ***defaultresourcegroup-{Region}*** resource group. |
+ | Traffic analytics processing interval | Select the processing interval that you prefer. Available options are **Every 1 hour** and **Every 10 mins**. The default processing interval is every hour. For more information, see [Traffic analytics](traffic-analytics.md). |
:::image type="content" source="./media/vnet-flow-logs-portal/enable-traffic-analytics-settings.png" alt-text="Screenshot that shows configurations of traffic analytics for an existing flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/enable-traffic-analytics-settings.png":::
openshift Azure Redhat Openshift Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md
Previously updated : 07/15/2024 Last updated : 07/23/2024
Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to
## Version 4.14 - May 2024
-We're pleased to announce the launch of OpenShift 4.14 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.14](https://docs.openshift.com/container-platform/4.14/welcome/index.html). Version 4.12 will be outside of support after July 17, 2024. Existing clusters on version 4.12 and below should be upgraded before then.
+We're pleased to announce the launch of OpenShift 4.14 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.14](https://docs.openshift.com/container-platform/4.14/welcome/index.html). You can check the end of support date on the [support lifecycle page](/azure/openshift/support-lifecycle) for previous versions.
In addition to making version 4.14 available, this release also makes the following features generally available:
payment-hsm Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/getting-started.md
This article provides steps and information necessary to get started with Azure Payment HSM.
-1. First, engage with your Microsoft account manager and get your business cases approved by Azure Payment HSM Product Manager. See [Getting started with Azure Payment HSM](getting-started.md). Ask your Microsoft account manager and Cloud Service Architect (CSA) to send a request [via email](mailto:paymentHSMRequest@microsoft.com).
-2. The Azure Payment HSM comes with payShield Manager license so you can remotely manage the HSM; you must have Thales smart cards and card readers for payShield Manager before onboarding Azure payment HSM. The minimum requirement is one compatible USB Smartcard reader with at least 5 payShield Manager Smartcards. Contact your Thales sales representative for the purchase or using existing compatible smart cards and readers. For more information, see the [Payment HSM support: Prerequisites](support-guide.md#prerequisites).
-
-3. Provide your contact information to the Microsoft account team and the Azure Payment HSM Product Manager [via email](mailto:paymentHSMRequest@microsoft.com), so they can set up your Thales support account.
-
- A Thales Customer ID is created, so you can submit payShield 10K support issues as well as download documentation, software, and firmware from Thales portal. The customer team can use the Thales Customer ID to create individual account access to Thales support portal.
-
- | Email Form |
- |--|
- |Trading Name:|
- |Full Address:<br><br><br>
- |Country:|
- |Post Code:|
- |Contact:|
- | Address Type: Civil / Military |
- | Telephone No. (with Country Code): |
- | Is it state owned/governmental: Y / N
- |Located in a Free trade zone: Y / N|
-4. You must next engage with the Microsoft CSAs to plan your deployment, and to understand the networking requirements and constraints/workarounds before onboarding the service. For details, see:
+1. First, engage with your Microsoft account manager and get your business cases approved by the Azure Payment HSM Product Manager. See [Getting started with Azure Payment HSM](getting-started.md). Ask your Microsoft account manager and Cloud Service Architect (CSA) to fill out the [Payment HSM: initial contact form](https://forms.office.com/r/yxREMbybct).
+1. Azure Payment HSM comes with a payShield Manager license so that you can remotely manage the HSM; you must have Thales smart cards and card readers for payShield Manager before onboarding Azure Payment HSM. The minimum requirement is one compatible USB smart card reader with at least five payShield Manager smart cards. Contact your Thales sales representative to purchase them or to confirm that your existing smart cards and readers are compatible. For more information, see [Payment HSM support: Prerequisites](support-guide.md#prerequisites).
+1. Fill out the [Payment HSM: Thales support account request form](https://forms.office.com/r/tDNPwLCsqB) to provide your contact information to the Microsoft account team and the Azure Payment HSM Product Manager, so they can set up your Thales support account. After your Thales Customer ID is created, you can submit payShield 10K support issues and download documentation, software, and firmware from the Thales portal. The customer team can use the Thales Customer ID to create individual account access to the Thales support portal.
+1. You must next engage with the Microsoft CSAs to plan your deployment, and to understand the networking requirements and constraints/workarounds before onboarding the service. For details, see:
- [Azure Payment HSM deployment scenarios](deployment-scenarios.md)
- [Solution design for Azure Payment HSM](solution-design.md)
- [Azure Payment HSM "fastpathenabled" feature flag and tag](fastpathenabled.md)
- [Azure Payment HSM traffic inspection](inspect-traffic.md)
-5. Contact Microsoft support to get your subscription approved and receive feature registration, to access the Azure payment HSM service. See [Register the Azure Payment HSM resource providers](register-payment-hsm-resource-providers.md?tabs=azure-cli). There is no charge at this step.
-6. To create payment HSMs, follow the [Tutorials](create-payment-hsm.md) and [How-To Guides](register-payment-hsm-resource-providers.md). Customer billing starts when the HSM resource is created.
-7. Upgrade the payShield 10K firmware to their desired version.
-8. Review the support process and scope here for Microsoft support and Thales's support: [Azure Payment HSM Service support guide ](support-guide.md).
-9. Monitor your payShield 10K using standard SNMP V3 tools. payShield Monitor is another product available to provide continuous monitoring of HSMs. Contact Thales Sales rep for licensing information.
+1. Contact Microsoft support to get your subscription approved and receive feature registration so that you can access the Azure Payment HSM service. See [Register the Azure Payment HSM resource providers](register-payment-hsm-resource-providers.md?tabs=azure-cli). There is no charge at this step.
+1. To create payment HSMs, follow the [Tutorials](create-payment-hsm.md) and [How-To Guides](register-payment-hsm-resource-providers.md). Customer billing starts when the HSM resource is created.
+1. Upgrade the payShield 10K firmware to your desired version.
+1. Review the support process and scope for Microsoft support and Thales support: [Azure Payment HSM Service support guide](support-guide.md).
+1. Monitor your payShield 10K by using standard SNMP V3 tools. payShield Monitor is another product available to provide continuous monitoring of HSMs. Contact your Thales sales representative for licensing information.
## Next steps
postgresql Generative Ai Azure Cognitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-cognitive.md
Last updated 05/20/2024 + - build-2024
postgresql Generative Ai Azure Local Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-local-ai.md
Last updated 05/20/2024+ - build-2024
postgresql Generative Ai Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-machine-learning.md
Last updated 05/28/2024 + - build-2024
postgresql Generative Ai Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-openai.md
Last updated 05/20/2024
+ - ignite-2023 - build-2024
postgresql Generative Ai Azure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-overview.md
Last updated 05/20/2024
+ - ignite-2023 - build-2024
postgresql Generative Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-overview.md
Last updated 04/27/2024
+ - ignite-2023
postgresql Generative Ai Recommendation System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-recommendation-system.md
Last updated 04/27/2024
+ - ignite-2023
postgresql Generative Ai Semantic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-semantic-search.md
Last updated 04/27/2024
+ - ignite-2023
postgresql How To Integrate Azure Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-integrate-azure-ai.md
Last updated 04/27/2024
+ - ignite-2023
postgresql How To Optimize Performance Pgvector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-optimize-performance-pgvector.md
Last updated 04/27/2024
+ - build-2023 - ignite-2023
postgresql How To Use Pg Partman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-use-pg-partman.md
Last updated 05/17/2024 + #customer intent: As a developer, I want to learn how to enable and use pg_partman on Azure Database for PostgreSQL - Flexible Server so that I can optimize my database performance.
postgresql How To Use Pgvector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-use-pgvector.md
Last updated 04/27/2024
+ - build-2023 - ignite-2023
postgresql Concepts Known Issues Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-known-issues-migration-service.md
Here are common limitations that apply to migration scenarios:
- Create TYPE
- The migration service doesn't support migration at the object level, that is, at the table level or schema level.
-- The migration service is unable to perform migration when the source database is Azure Database for PostgreSQL - Single Server with no public access or is an on-premises/AWS using a private IP, and the target Azure Database for PostgreSQL - Flexible Server instance is accessible only through a private endpoint.
- Migration to burstable SKUs isn't supported. Databases must first be migrated to a nonburstable SKU and then scaled down if needed.
- The Migration Runtime Server is designed to operate with the default DNS servers/private DNS zones, for example, `privatelink.postgres.database.azure.com`. Custom DNS names/DNS servers aren't supported by the migration service when you use the Migration Runtime Server feature. When you're configuring private endpoints for both the source and target databases, it's imperative to use the default private DNS zone provided by Azure for the private link service. The use of custom DNS configurations isn't yet supported and might lead to connectivity issues during the migration process.
search Cognitive Search Custom Skill Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-web-api.md
- ignite-2023 Previously updated : 03/05/2024 Last updated : 07/22/2024 # Custom Web API skill in an Azure AI Search enrichment pipeline
Parameters are case-sensitive.
| Parameter name | Description |
|--|-|
| `uri` | The URI of the Web API to which the JSON payload is sent. Only the **https** URI scheme is allowed. |
-| `authResourceId` | (Optional) A string that if set, indicates that this skill should use a managed identity on the connection to the function or app hosting the code. This property takes an application (client) ID or app's registration in Microsoft Entra ID, in a [supported format](../active-directory/develop/security-best-practices-for-app-registration.md#application-id-uri): `api://<appId>`. This value is used to scope the authentication token retrieved by the indexer, and is sent along with the custom Web skill API request to the function or app. Setting this property requires that your search service is [configured for managed identity](search-howto-managed-identities-data-sources.md) and your Azure function app is [configured for a Microsoft Entra sign in](../app-service/configure-authentication-provider-aad.md). To use this parameter, call the API with `api-version=2023-10-01-Preview`. |
+| `authResourceId` | (Optional) A string that if set, indicates that this skill should use a managed identity on the connection to the function or app hosting the code. This property takes an application (client) ID or app's registration in Microsoft Entra ID, in any of these formats: `api://<appId>`, `<appId>/.default`, `api://<appId>/.default`. This value is used to scope the authentication token retrieved by the indexer, and is sent along with the custom Web skill API request to the function or app. Setting this property requires that your search service is [configured for managed identity](search-howto-managed-identities-data-sources.md) and your Azure function app is [configured for a Microsoft Entra sign in](../app-service/configure-authentication-provider-aad.md). To use this parameter, call the API with `api-version=2023-10-01-Preview`. |
| `httpMethod` | The method to use while sending the payload. Allowed methods are `PUT` or `POST`. |
| `httpHeaders` | A collection of key-value pairs where the keys represent header names and values represent header values that are sent to your Web API along with the payload. The following headers are prohibited from being in this collection: `Accept`, `Accept-Charset`, `Accept-Encoding`, `Content-Length`, `Content-Type`, `Cookie`, `Host`, `TE`, `Upgrade`, `Via`. |
| `timeout` | (Optional) When specified, indicates the timeout for the http client making the API call. It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). For example, `PT60S` for 60 seconds. If not set, a default value of 30 seconds is chosen. The timeout can be set to a maximum of 230 seconds and a minimum of 1 second. |
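
To show how these parameters fit together, here's a hedged sketch of a custom Web API skill definition, written as a Python dict that mirrors the JSON payload; the endpoint, header value, and app ID are placeholders, and the `inputs`/`outputs` mappings are illustrative rather than taken from this article.

```python
# Illustrative skill definition assembled from the parameters above.
web_api_skill = {
    "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
    "uri": "https://<your-function-app>.azurewebsites.net/api/enrich",  # https only
    "httpMethod": "POST",  # PUT or POST
    "httpHeaders": {"x-custom-header": "<value>"},  # Accept, Host, etc. are prohibited
    "timeout": "PT60S",  # XSD dayTimeDuration; 1 to 230 seconds, default 30
    # Optional managed-identity scope; any of these formats is accepted:
    # api://<appId>, <appId>/.default, api://<appId>/.default.
    # Requires api-version=2023-10-01-Preview.
    "authResourceId": "api://<appId>",
    "inputs": [{"name": "text", "source": "/document/content"}],
    "outputs": [{"name": "enrichedText", "targetName": "enrichedText"}],
}
```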
search Search Get Started Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rag.md
Requests to the search endpoint must be authenticated and authorized. You can us
1. On the System assigned tab, set status to **On**.
-1. Configure Azure AI Search for role-based access and assign roles:
+1. Configure Azure AI Search for role-based access:
1. In the Azure portal, find your Azure AI Search service.
Requests to the search endpoint must be authenticated and authorized. You can us
1. On the left menu, select **Access control (IAM)**.
- 1. Add the following role assignments for the Azure OpenAI managed identity: **Search Index Data Reader**, **Search Service Contributor**.
+1. Assign roles:
-1. Assign yourself to the **Cognitive Services OpenAI User** role on Azure OpenAI. This is the only role you need for query workloads.
+ 1. On Azure AI Search, add two role assignments for the Azure OpenAI managed identity:
+
+ - **Search Index Data Reader**
+ - **Search Service Contributor**
+
+ 1. On Azure OpenAI, assign yourself to a role. The code for this quickstart runs locally. Requests to Azure OpenAI originate from your system:
+
+ - **Cognitive Services OpenAI User**
It can take several minutes for permissions to take effect.
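
For context, the reason you assign the role to yourself is that the quickstart's requests are issued under your local credential. A minimal sketch of how that credential turns into tokens, assuming the azure-identity package:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# DefaultAzureCredential picks up your local sign-in (for example, the Azure CLI),
# so the Cognitive Services OpenAI User role assigned to you above is what
# authorizes these requests.
credential = DefaultAzureCredential()
token_provider = get_bearer_token_provider(
    credential, "https://cognitiveservices.azure.com/.default"
)
```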
This section uses Visual Studio Code and Python to call the chat APIs on Azure O
AZURE_DEPLOYMENT_MODEL: str = "gpt-35-turbo" ```
-1. Specify query parameters. The query is a keyword search using semantic ranking. The search engine returns up to 50 matches, but the model returns just the top 5 in the response. If you can't enable semantic ranking on your search service, set the value to false.
+1. Run the following code to set query parameters. The query is a keyword search using semantic ranking. In a keyword search, the search engine returns up to 50 matches, but only the top 5 are provided to the model. If you can't enable semantic ranking on your search service, set `use_semantic_reranker` to `False`.
```python
# Set query parameters for grounding the conversation on your search index
- k=50
search_type="text" use_semantic_reranker=True sources_to_include=5
This section uses Visual Studio Code and Python to call the chat APIs on Azure O
from azure.core.credentials_async import AsyncTokenCredential
from azure.identity.aio import get_bearer_token_provider
from azure.search.documents.aio import SearchClient
- from azure.search.documents.models import VectorizableTextQuery, HybridSearch
from openai import AsyncAzureOpenAI
from enum import Enum
from typing import List, Optional
This section uses Visual Studio Code and Python to call the chat APIs on Azure O
HYBRID = "hybrid" # This function retrieves the selected fields from the search index
- async def get_sources(search_client: SearchClient, query: str, search_type: SearchType, use_semantic_reranker: bool = True, sources_to_include: int = 5, k: int = 50) -> List[str]:
+ async def get_sources(search_client: SearchClient, query: str, search_type: SearchType, use_semantic_reranker: bool = True, sources_to_include: int = 5) -> List[str]:
search_type == SearchType.TEXT, response = await search_client.search( search_text=query,
This section uses Visual Studio Code and Python to call the chat APIs on Azure O
"content": message })
- async def append_grounded_message(self, search_client: SearchClient, query: str, search_type: SearchType, use_semantic_reranker: bool = True, sources_to_include: int = 5, k: int = 50):
- sources = await get_sources(search_client, query, search_type, use_semantic_reranker, sources_to_include, k)
+ async def append_grounded_message(self, search_client: SearchClient, query: str, search_type: SearchType, use_semantic_reranker: bool = True, sources_to_include: int = 5):
+ sources = await get_sources(search_client, query, search_type, use_semantic_reranker, sources_to_include)
sources_formatted = "\n".join([f'{document["HotelName"]}:{document["Description"]}:{document["Tags"]}' for document in sources])
self.append_message(role="user", message=GROUNDED_PROMPT.format(query=query, sources=sources_formatted))
self.search_results.append(
This section uses Visual Studio Code and Python to call the chat APIs on Azure O
query="Can you recommend a few hotels near the ocean with beach access and good views", search_type=SearchType(search_type), use_semantic_reranker=use_semantic_reranker,
- sources_to_include=sources_to_include,
- k=k)
+ sources_to_include=sources_to_include)
await chat_thread.get_openai_response(openai_client=openai_client, model=chat_deployment)
print(chat_thread.get_last_message()["content"])
```
+ Output is from Azure OpenAI, and it consists of recommendations for several hotels. Here's an example of what the output might look like:
+
+ ```
+ Based on your criteria, we recommend the following hotels:
+
+ - Contoso Ocean Motel: located right on the beach and has private balconies with ocean views. They also have indoor and outdoor pools. It's located on the boardwalk near shops and art entertainment.
+ - Northwind Plaza & Suites: offers ocean views, free Wi-Fi, full kitchen, and a free breakfast buffet. Although not directly on the beach, this hotel has great views and is near the aquarium. They also have a pool.
+
+ Several other hotels have views and water features, but do not offer beach access or views of the ocean.
+ ```
+
+ To experiment further, change the query and rerun the last step to better understand how the model works with your data.
+
+ You can also modify the prompt to change the tone or structure of the output.
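
For orientation, here's a condensed, synchronous sketch of the same grounding pattern: fetch a few documents from the index, format them as sources, and pass them to the chat model. It simplifies to a plain keyword search without semantic ranking; the endpoint, index, and deployment names are placeholders, and the field names match the hotels sample data used above.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from azure.search.documents import SearchClient
from openai import AzureOpenAI

credential = DefaultAzureCredential()
search_client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="<your-index>",
    credential=credential,
)
openai_client = AzureOpenAI(
    azure_endpoint="https://<openai-resource>.openai.azure.com",
    api_version="2024-02-01",
    azure_ad_token_provider=get_bearer_token_provider(
        credential, "https://cognitiveservices.azure.com/.default"
    ),
)

query = "Can you recommend a few hotels near the ocean with beach access and good views"

# Keyword search; only the top matches are passed to the model as sources.
results = search_client.search(search_text=query, top=5)
sources = "\n".join(
    f'{doc["HotelName"]}:{doc["Description"]}:{doc["Tags"]}' for doc in results
)

response = openai_client.chat.completions.create(
    model="gpt-35-turbo",
    messages=[{
        "role": "user",
        "content": f"Answer only from these sources:\n{sources}\n\nQuery: {query}",
    }],
)
print(response.choices[0].message.content)
```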
+
## Clean up

When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
search Vector Search Vectorizer Custom Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-vectorizer-custom-web-api.md
- build-2024 Previously updated : 05/28/2024 Last updated : 07/22/2024 # Custom Web API vectorizer
Parameters are case-sensitive.
| `uri` | The URI of the Web API to which the JSON payload is sent. Only the **https** URI scheme is allowed. |
| `httpMethod` | The method to use while sending the payload. Allowed methods are `PUT` or `POST`. |
| `httpHeaders` | A collection of key-value pairs where the keys represent header names and values represent header values that are sent to your Web API along with the payload. The following headers are prohibited from being in this collection: `Accept`, `Accept-Charset`, `Accept-Encoding`, `Content-Length`, `Content-Type`, `Cookie`, `Host`, `TE`, `Upgrade`, `Via`. |
-| `authResourceId` | (Optional) A string that if set, indicates that this vectorizer should use a managed identity on the connection to the function or app hosting the code. This property takes an application (client) ID or app's registration in Microsoft Entra ID, in a [supported format](../active-directory/develop/security-best-practices-for-app-registration.md#application-id-uri): `api://<appId>`. This value is used to scope the authentication token retrieved by the indexer, and is sent along with the custom Web API request to the function or app. Setting this property requires that your search service is [configured for managed identity](search-howto-managed-identities-data-sources.md) and your Azure function app is [configured for a Microsoft Entra sign in](../app-service/configure-authentication-provider-aad.md). |
+| `authResourceId` | (Optional) A string that if set, indicates that this vectorizer should use a managed identity on the connection to the function or app hosting the code. This property takes an application (client) ID or app's registration in Microsoft Entra ID, in any of these formats: `api://<appId>`, `<appId>/.default`, `api://<appId>/.default`. This value is used to scope the authentication token retrieved by the indexer, and is sent along with the custom Web API request to the function or app. Setting this property requires that your search service is [configured for managed identity](search-howto-managed-identities-data-sources.md) and your Azure function app is [configured for a Microsoft Entra sign in](../app-service/configure-authentication-provider-aad.md). |
| `authIdentity` | (Optional) A user-managed identity used by the search service for connecting to the function or app hosting the code. You can use either a [system or user managed identity](search-howto-managed-identities-data-sources.md). To use a system managed identity, leave `authIdentity` blank. |
| `timeout` | (Optional) When specified, indicates the timeout for the http client making the API call. It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). For example, `PT60S` for 60 seconds. If not set, a default value of 30 seconds is chosen. The timeout can be set to a maximum of 230 seconds and a minimum of 1 second. |
There are the following other considerations to make when implementing a web API
+ [Integrated vectorization](vector-search-integrated-vectorization.md)
+ [How to configure a vectorizer in a search index](vector-search-how-to-configure-vectorizer.md)
+ [Custom Web API skill](cognitive-search-custom-skill-web-api.md)
-+ [Hugging Face Embeddings Generator power skill (can be used for a custom web API vectorizer as well)](https://github.com/Azure-Samples/azure-search-power-skills/tree/main/Vector/EmbeddingGenerator)
++ [Hugging Face Embeddings Generator power skill (can be used for a custom web API vectorizer as well)](https://github.com/Azure-Samples/azure-search-power-skills/tree/main/Vector/EmbeddingGenerator)
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Previously updated : 07/12/2024 Last updated : 07/22/2024
This article outlines the specific root and subordinate Certificate Authorities
Any entity trying to access Microsoft Entra identity services via the TLS/SSL protocols will be presented with certificates from the CAs listed in this article. Different services may use different root or intermediate CAs. The following root and subordinate CAs are relevant to entities that use [certificate pinning](certificate-pinning.md).

**How to read the certificate details:**
+
- The Serial Number (top string in the table) contains the hexadecimal value of the certificate serial number.
- The Thumbprint (bottom string in the table) is the SHA1 thumbprint.
- CAs listed in italics are the most recently added CAs.
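
Because a thumbprint is just the SHA1 digest of the certificate's DER encoding, you can reproduce the values below with the Python standard library; a small sketch, using an example hostname:

```python
import hashlib
import ssl

# Fetch a server's leaf certificate and compute its SHA1 thumbprint.
# The hostname here is only an example endpoint to check.
pem = ssl.get_server_certificate(("login.microsoftonline.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
thumbprint = hashlib.sha1(der).hexdigest().upper()
print(thumbprint)  # compare against the thumbprints in the tables below
```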
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| [DigiCert Global Root CA](https://cacerts.digicert.com/DigiCertGlobalRootCA.crt) | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 |
| [DigiCert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
| [DigiCert Global Root G3](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
+| [Entrust Root Certification Authority G2](https://web.entrust.com/root-certificates/entrust_g2_ca.cer) | 4a538c28<br>8cf427fd790c3ad166068de81e57efbb932272d4 |
| [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73a5e64a3bff8316ff0edccc618a906e4eae4d74 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
|- |- |
| [DigiCert Basic RSA CN CA G2](https://crt.sh/?d=2545289014) | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 |
| [DigiCert Cloud Services CA-1](https://crt.sh/?d=12624881) | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 |
+| [DigiCert Cloud Services CA-1](https://crt.sh/?d=B3F6B64A07BB9611F47174407841F564FB991F29) | 0f171a48c6f223809218cd2ed6ddc0e8<br>b3f6b64a07bb9611f47174407841f564fb991f29 |
| [DigiCert SHA2 Secure Server CA](https://crt.sh/?d=3422153451) | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C |
| [DigiCert TLS Hybrid ECC SHA384 2020 CA1](https://crt.sh/?d=3422153452) | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 |
| [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
+| [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=6938FD4D98BAB03FAADB97B34396831E3780AEA1) | 0a3508d55c292b017df8ad65c00ff7e4<br>6938fd4d98bab03faadb97b34396831e3780aea1 |
+| [Entrust Certification Authority - L1K](https://aia.entrust.net/l1k-chain256.cer) | 0ee94cc30000000051d37785<br>f21c12f46cdb6b2e16f09f9419cdff328437b2d7 |
+| [Entrust Certification Authority - L1M](https://aia.entrust.net/l1m-chain256.cer) | 61a1e7d20000000051d366a6<br>cc136695639065fab47074d28c55314c66077e90 |
| [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF |
-| [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 |
-| [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 |
-| [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 |
-| [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |
| [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 |
| [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 |
| [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x02393d48d702425a7cb41c000b0ed7ca<br>FB73FDC24F06998E070A06B6AFC78FDF2A155B25 |
| [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004.crt) | 0x33000000322164aedab61f509d000000000032<br>406E3B38EFF35A727F276FE993590B70F8224AED |
-| [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 |
-| [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 |
-| [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 |
-| [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |
| [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0f1f157582cdcd33734bdc5fcd941a33<br>3BE6CA5856E3B9709056DA51F32CBC8970A83E28 |
| [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007.crt) | 0x3300000034c732435db22a0a2b000000000034<br>AB3490B7E37B3A8A1E715036522AB42652C3CFFE |
| [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 |
| [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC |
| [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E |
-| [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 |
-| [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 |
-| [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |
-| [Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032) | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 |
-| [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 |
-| [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 |
-| [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 |
-| [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |
| [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) | 0x33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 |
| [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) | 0x33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 |
| [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) | 0x330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| └ [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
| └ [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF |
| [**DigiCert Global Root G2**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
-| └ [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 |
-| └ [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |
| └ [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x05196526449a5e3d1a38748f5dcfebcc<br>F9388EA2C9B7D632B66A2B0B406DF1D37D3901F6 |
| └ [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x09f96ec295555f24749eaf1e5dced49d<br>BE68D0ADAA2345B48E507320B695D386080E5B25 |
| └ [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0a43a9509b01352f899579ec7208ba50<br>3382517058A0C20228D598EE7501B61256A76442 |
| └ [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC |
-| └ [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 |
-| └ [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 |
| [**DigiCert Global Root G3**](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
-| └ [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 |
-| └ [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 |
| └ [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 |
| └ [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x02393d48d702425a7cb41c000b0ed7ca<br>FB73FDC24F06998E070A06B6AFC78FDF2A155B25 |
-| └ [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 |
-| └ [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 |
| └ [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0f1f157582cdcd33734bdc5fcd941a33<br>3BE6CA5856E3B9709056DA51F32CBC8970A83E28 |
| └ [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 |
+| [**Entrust Root Certification Authority G2**](https://web.entrust.com/root-certificates/entrust_g2_ca.cer) | 4a538c28<br>8cf427fd790c3ad166068de81e57efbb932272d4 |
+| └ [Entrust Certification Authority - L1K](https://aia.entrust.net/l1k-chain256.cer) | 0ee94cc30000000051d37785<br>f21c12f46cdb6b2e16f09f9419cdff328437b2d7 |
+| └ [Entrust Certification Authority - L1M](https://aia.entrust.net/l1m-chain256.cer) | 61a1e7d20000000051d366a6<br>cc136695639065fab47074d28c55314c66077e90 |
| [**Microsoft ECC Root Certificate Authority 2017**](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
-| └ [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 |
-| └ [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |
| └ [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 |
| └ [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004.crt) | 0x33000000322164aedab61f509d000000000032<br>406E3B38EFF35A727F276FE993590B70F8224AED |
-| └ [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 |
-| └ [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |
| └ [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007.crt) | 0x3300000034c732435db22a0a2b000000000034<br>AB3490B7E37B3A8A1E715036522AB42652C3CFFE |
| └ [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008.crt) | 0x3300000031526979844798bbb8000000000031<br>CF33D5A1C2F0355B207FCE940026E6C1580067FD |
| └ [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) |33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| └ [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004.crt) | 0x330000003cd7cb44ee579961d000000000003c<br>7304022CA8A9FF7E3E0C1242E0110E643822C45E |
| └ [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 |
| └ [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E |
-| └ [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 |
-| └ [Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032) | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 |
-| └ [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 |
-| └ [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |
| └ [Microsoft RSA TLS Issuing AOC CA 01](https://crt.sh/?d=4789678141) |330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee |
| └ [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092) |3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e |
| └ [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) |33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 |
Microsoft updated Azure services to use TLS certificates from a different set of
### Article change log
+- July 22, 2024: Added Entrust CAs from a parallel Microsoft 365 article to provide a comprehensive list.
+- June 27, 2024: Removed the following CAs, which were superseded by both versions of Microsoft Azure ECC TLS Issuing CAs 03, 04, 07, 08.
+
+ | Certificate Authority | Serial Number<br>Thumbprint |
+ |- |- |
+ | [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer)|0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0|
+ |[Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805)|0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268|
+ |[Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer)|0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1|
+ |[Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233)|0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6|
 |[Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer)|0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531|
+ |[Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161)| 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4|
+ |[Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer)|0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163|
+ |[Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228)|0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483|
+ |[Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer)| 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173|
+ |[Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024)| 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3|
+ |[Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer)| 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA|
+ |[Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032)| 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08|
+ |[Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer)| 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5|
+ |[Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057)|0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87|
+ |[Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer)| 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0|
+ |[Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106)|0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6|
+
- July 17, 2023: Added 16 new subordinate Certificate Authorities
- February 7, 2023: Added eight new subordinate Certificate Authorities
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/duplicate-detection.md
Title: Azure Service Bus duplicate message detection | Microsoft Docs description: This article explains how you can detect duplicates in Azure Service Bus messages. The duplicate message can be ignored and dropped. Previously updated : 06/08/2023 Last updated : 07/23/2024 # Duplicate detection
-If an application fails due to a fatal error immediately after it sends a message, and the restarted application instance erroneously believes that the prior message delivery didn't occur, a subsequent send causes the same message to appear in the system twice.
+If an application fails due to a fatal error immediately after sending a message, and the restarted application instance erroneously believes that the prior message delivery didn't occur, a subsequent send causes the same message to appear in the system twice.
It's also possible for an error at the client or network level to occur a moment earlier, and for a sent message to be committed into the queue, with the acknowledgment not successfully returned to the client. This scenario leaves the client in doubt about the outcome of the send operation.
Enabling duplicate detection helps keep track of the application-controlled `Mes
Application control of the identifier is essential, because only that allows the application to tie the `MessageId` to a business process context from which it can be predictably reconstructed when a failure occurs.
-For a business process in which multiple messages are sent in the course of handling some application context, the `MessageId` may be a composite of the application-level context identifier, such as a purchase order number, and the subject of the message, for example, **12345.2017/payment**.
+For a business process in which multiple messages are sent in the course of handling some application context, the `MessageId` can be a composite of the application-level context identifier, such as a purchase order number, and the subject of the message, for example, **12345.2017/payment**.
The `MessageId` can always be some GUID, but anchoring the identifier to the business process yields predictable repeatability, which is desired for using the duplicate detection feature effectively.
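To make this concrete, here's a minimal C# sketch, assuming the Azure.Messaging.ServiceBus client; the namespace, queue name, and order number are placeholders rather than anything from the original article:

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Sketch: derive MessageId from the business context (a purchase order number
// plus the message subject) instead of a random GUID. Names are placeholders.
await using var client = new ServiceBusClient(
    "<your namespace>.servicebus.windows.net", new DefaultAzureCredential());
ServiceBusSender sender = client.CreateSender("orders");

string purchaseOrderNumber = "12345";
var message = new ServiceBusMessage(BinaryData.FromString("{\"amount\": 100}"))
{
    // Predictably reconstructible after a crash: <order number>.<subject>
    MessageId = $"{purchaseOrderNumber}.2017/payment"
};

// If a restarted sender resubmits the same logical message within the
// duplicate detection window, the broker drops the second copy.
await sender.SendMessageAsync(message);
```

Because the identifier is rebuilt from the purchase order rather than generated randomly, a crashed-and-restarted sender produces the same `MessageId`, and the broker can recognize and drop the duplicate.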
The `MessageId` can always be some GUID, but anchoring the identifier to the bus
## Duplicate detection window size
-Apart from just enabling duplicate detection, you can also configure the size of the duplicate detection history time window during which message-ids are retained. This value defaults to 10 minutes for queues and topics, with a minimum value of 20 seconds to maximum value of 7 days.
+Apart from just enabling duplicate detection, you can also configure the size of the duplicate detection history time window during which message IDs are retained. This value defaults to 10 minutes for queues and topics, with a minimum value of 20 seconds and a maximum value of 7 days.
Enabling duplicate detection and the size of the window directly impact the queue (and topic) throughput, since all recorded message IDs must be matched against the newly submitted message identifier.
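As a hedged sketch of setting the window at entity creation time, assuming the Azure.Messaging.ServiceBus.Administration API (the entity name is a placeholder):

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus.Administration;

// Sketch: create a queue with duplicate detection enabled and a 30-minute
// history window (allowed range: 20 seconds to 7 days; default: 10 minutes).
var adminClient = new ServiceBusAdministrationClient(
    "<your namespace>.servicebus.windows.net", new DefaultAzureCredential());

var options = new CreateQueueOptions("orders")
{
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(30)
};
await adminClient.CreateQueueAsync(options);
```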
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
Title: Monitoring Azure Service Bus data reference
-description: Important reference material needed when you monitor Azure Service Bus.
+ Title: Monitoring data reference for Azure Service Bus
+description: This article contains important reference material you need when you monitor Azure Service Bus by using Azure Monitor.
Last updated : 07/22/2024+ - Previously updated : 10/11/2022+++
+# Azure Service Bus monitoring data reference
-# Monitoring Azure Service Bus data reference
-See [Monitoring Azure Service Bus](monitor-service-bus.md) for details on collecting and analyzing monitoring data for Azure Service Bus.
-> [!NOTE]
-> Azure Monitor doesn't include dimensions in the exported metrics data sent to a destination like Azure Storage, Azure Event Hubs, Log Analytics, etc.
+See [Monitor Azure Service Bus](monitor-service-bus.md) for details on the data you can collect for Service Bus and how to use it.
-## Metrics
-This section lists all the automatically collected platform metrics collected for Azure Service Bus. The resource provider for these metrics is **Microsoft.ServiceBus/namespaces**.
-### Request metrics
-Counts the number of data and management operations requests.
+### Supported metrics for Microsoft.ServiceBus/Namespaces
+
+The following table lists the metrics available for the Microsoft.ServiceBus/Namespaces resource type.
+
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | - | -- | | | |
-| Incoming Requests| Yes | Count | Total | The number of requests made to the Service Bus service over a specified period. | EntityName |
-|Successful Requests| No | Count | Total | The number of successful requests made to the Service Bus service over a specified period. | Entity name<br/>OperationResult|
-|[Server Errors](service-bus-messaging-exceptions.md#exception-categories)| No | Count | Total | The number of requests not processed because of an error in the Service Bus service over a specified period. | Entity name<br/>OperationResult|
-|[User Errors](service-bus-messaging-exceptions.md#exception-categories) | No | Count | Total | The number of requests not processed because of user errors over a specified period. | Entity name|
-|Throttled Requests| No | Count | Total | <p>The number of requests that were throttled because the usage was exceeded.</p><p>MessagingErrorSubCode dimension has the following possible values: <br/><ul><li><b>CPU:</b> CPU throttling</li><li><b>Storage:</b>It indicates throttle because of pending checkpoint operations</li><li><b>Namespace:</b>Namespace operations throttling.</li><li><b>Unknown:</b> Other resource throttling.</li></p> | Entity name<br/>MessagingErrorSubCode |
-| Pending Checkpoint Operations Count | No | count | Average | The number of pending checkpoint operations on the namespace. Service starts to throttle when the pending checkpoint count exceeds limit of (500,000 + (500,000 * messaging units)) operations. This metric applies only to namespaces using the **premium** tier. | MessagingErrorSubCode |
-| Server Send Latency | No | milliseconds | Average | The time taken by the Service Bus service to complete the request. | Entity name |
+The following sections provide more detailed descriptions for metrics presented in the previous section.
+### Request metrics
+
+*Request metrics* count the number of data and management operations requests.
-The following two types of errors are classified as **user errors**:
+| Metric | Description |
+|:-|:|
+| Incoming Requests | The number of requests made to the Service Bus service over a specified period. |
+| Successful Requests | The number of successful requests made to the Service Bus service over a specified period. |
+| [Server Errors](service-bus-messaging-exceptions.md#exception-categories) | The number of requests not processed because of an error in the Service Bus service over a specified period. |
+| [User Errors](service-bus-messaging-exceptions.md#exception-categories) | The number of requests not processed because of user errors over a specified period. |
+| Throttled Requests | The number of requests that were throttled because the usage was exceeded. The MessagingErrorSubCode dimension has the following possible values: **CPU** (CPU throttling), **Storage** (throttling because of pending checkpoint operations), **Namespace** (namespace operations throttling), and **Unknown** (other resource throttling). |
+| Pending Checkpoint Operations Count | The number of pending checkpoint operations on the namespace. The service starts to throttle when the pending checkpoint count exceeds the limit of (500,000 + (500,000 * messaging units)) operations. This metric applies only to namespaces that use the **premium** tier. |
+| Server Send Latency | The time taken by the Service Bus service to complete the request. |
-1. Client-side errors (In HTTP that would be 400 errors).
-2. Errors that occur while processing messages, such as [MessageLockLostException](/dotnet/api/azure.messaging.servicebus.servicebusfailurereason).
+The following two types of errors are classified as *user errors*:
+- Client-side errors (in HTTP, these are 400-level errors).
+- Errors that occur while processing messages, such as [MessageLockLostException](/dotnet/api/azure.messaging.servicebus.servicebusfailurereason).
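For illustration only, here's how the second category can surface in a .NET client, assuming the Azure.Messaging.ServiceBus library (namespace and queue names are placeholders):

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Sketch: a lost message lock surfaces in the .NET client as a
// ServiceBusException with the MessageLockLost reason. Names are placeholders.
await using var client = new ServiceBusClient(
    "<your namespace>.servicebus.windows.net", new DefaultAzureCredential());
ServiceBusReceiver receiver = client.CreateReceiver("orders");

ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
if (message is null) return; // nothing to receive

try
{
    await receiver.CompleteMessageAsync(message);
}
catch (ServiceBusException ex) when (ex.Reason == ServiceBusFailureReason.MessageLockLost)
{
    // The lock expired before completion; the message is redelivered later
    // with an incremented DeliveryCount.
    Console.WriteLine($"Lock lost: {ex.Message}");
}
```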
### Message metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | - | -- | | | |
-|Incoming Messages| Yes | Count | Total | The number of events or messages sent to Service Bus over a specified period. For basic and standard tiers, incoming auto-forwarded messages are included in this metric. And, for the premium tier, they aren't included. | Entity name|
-|Outgoing Messages| Yes | Count | Total | The number of events or messages received from Service Bus over a specified period. The outgoing auto-forwarded messages aren't included in this metric. | Entity name|
-| Messages | No | Count | Average | Count of messages in a queue/topic. This metric includes messages in all the different states like active, dead-lettered, scheduled, etc. | Entity name |
-| Active Messages| No | Count | Average | Count of active messages in a queue/topic. Active messages are the messages in the queue or subscription that are in the active state and ready for delivery. The messages are available to be received. | Entity name |
-| Dead-lettered messages| No | Count | Average | Count of dead-lettered messages in a queue/topic. | Entity name |
-| Scheduled messages| No | Count | Average | Count of scheduled messages in a queue/topic. | Entity name |
-|Completed Messages| Yes | Count | Total | The number of messages completed over a specified period. | Entity name|
-| Abandoned Messages| Yes | Count | Total | The number of messages abandoned over a specified period. | Entity name|
-| Size | No | Bytes | Average | Size of an entity (queue or topic) in bytes. | Entity name |
+The following metrics are *message metrics*.
+
+| Metric | Description |
+|:-|:|
+| Incoming Messages | The number of events or messages sent to Service Bus over a specified period. For the basic and standard tiers, incoming autoforwarded messages are included in this metric; for the premium tier, they aren't. |
+| Outgoing Messages | The number of events or messages received from Service Bus over a specified period. Outgoing autoforwarded messages aren't included in this metric. |
+| Messages | Count of messages in a queue/topic. This metric includes messages in all states, such as active, dead-lettered, and scheduled. |
+| Active Messages | Count of active messages in a queue/topic. Active messages are the messages in the queue or subscription that are in the active state and ready for delivery. The messages are available to be received. |
+| Dead-lettered messages | Count of dead-lettered messages in a queue/topic. |
+| Scheduled messages | Count of scheduled messages in a queue/topic. |
+| Completed Messages | The number of messages completed over a specified period. |
+| Abandoned Messages | The number of messages abandoned over a specified period. |
+| Size | Size of an entity (queue or topic) in bytes. |
> [!IMPORTANT]
-> Values for messages, active, dead-lettered, scheduled, completed, and abandoned messages are point-in-time values. Incoming messages that were consumed immediately after that point-in-time may not be reflected in these metrics.
+> Values for messages, active, dead-lettered, scheduled, completed, and abandoned messages are point-in-time values. Incoming messages that were consumed immediately after that point-in-time might not be reflected in these metrics.
> [!NOTE]
-> When a client tries to get the info about a queue or topic, the Service Bus service returns some static information like name, last updated time, created time, requires session or not etc., and some dynamic information like message counts. If the request gets throttled, the service returns the static information and empty dynamic information. That's why message counts are shown as 0 when the namespace is being throttled. This behavior is by design.
+> When a client tries to get information about a queue or topic, the Service Bus service returns some static information, such as the name, last updated time, created time, and whether sessions are required, along with some dynamic information, such as message counts. If the request gets throttled, the service returns the static information and empty dynamic information. That's why message counts are shown as 0 when the namespace is being throttled. This behavior is by design.
### Connection metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | - | -- | | | |
-|Active Connections| No | Count | Total | The number of active connections on a namespace and on an entity in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric. | |
-|Connections Opened | No | Count | Average | The number of connections opened. Value for this metric is an aggregation, and includes all connections that were opened in the aggregation time window. | Entity name|
-|Connections Closed | No | Count | Average | The number of connections closed. Value for this metric is an aggregation, and includes all connections that were opened in the aggregation time window. | Entity name|
+The following metrics are *connection metrics*.
+
+| Metric | Description |
+|:-|:|
+| Active Connections | The number of active connections on a namespace and on an entity in the namespace. The value for this metric is a point-in-time value. Connections that were active immediately after that point in time might not be reflected in the metric. |
+| Connections Opened | The number of connections opened. Value for this metric is an aggregation, and includes all connections that were opened in the aggregation time window. |
+| Connections Closed | The number of connections closed. Value for this metric is an aggregation, and includes all connections that were opened in the aggregation time window. |
### Resource usage metrics
-> [!NOTE]
-> The following metrics are available only with the **premium** tier.
->
-> The important metrics to monitor for any outages for a premium tier namespace are: **CPU usage per namespace** and **memory size per namespace**. [Set up alerts](../azure-monitor/alerts/alerts-metric.md) for these metrics using Azure Monitor.
->
-> The other metric you could monitor is: **throttled requests**. It shouldn't be an issue though as long as the namespace stays within its memory, CPU, and brokered connections limits. For more information, see [Throttling in Azure Service Bus Premium tier](service-bus-throttling.md#throttling-in-premium-tier)
+The following *resource usage metrics* are available only with the **premium** tier.
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | - | -- | | | |
-|CPU usage per namespace| No | CPU | Percent | The percentage CPU usage of the namespace. | Replica |
-|Memory size usage per namespace| No | Memory Usage | Percent | The percentage memory usage of the namespace. | Replica |
+| Metric | Description |
+|:-|:|
+| CPU usage per namespace | The percentage CPU usage of the namespace. |
+| Memory size usage per namespace | The percentage memory usage of the namespace. |
+
+The important metrics to monitor for outages in a premium tier namespace are **CPU usage per namespace** and **memory size per namespace**. [Set up alerts](../azure-monitor/alerts/alerts-metric.md) for these metrics by using Azure Monitor.
+
+Another metric you could monitor is **throttled requests**. Throttling shouldn't be an issue as long as the namespace stays within its memory, CPU, and brokered connection limits. For more information, see [Throttling in Azure Service Bus Premium tier](service-bus-throttling.md#throttling-in-premium-tier).
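As an illustrative sketch, not part of the original article: in the .NET client, throttling shows up as a transient `ServiceBusException` with the `ServiceBusy` reason (names below are placeholders):

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Sketch: server-side throttling reaches the .NET client as a transient
// ServiceBusException whose Reason is ServiceBusy. Names are placeholders.
await using var client = new ServiceBusClient(
    "<your namespace>.servicebus.windows.net", new DefaultAzureCredential());
ServiceBusSender sender = client.CreateSender("orders");

try
{
    await sender.SendMessageAsync(new ServiceBusMessage("test"));
}
catch (ServiceBusException ex) when (ex.Reason == ServiceBusFailureReason.ServiceBusy)
{
    // Back off before retrying; the client's built-in retry policy also
    // retries transient failures like this one.
    await Task.Delay(TimeSpan.FromSeconds(10));
}
```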
### Error metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | -- | | | | |
-|Server Errors| No | Count | Total | The number of requests not processed because of an error in the Service Bus service over a specified period. | Entity name<br/><br/>Operation Result |
-|User Errors | No | Count | Total | The number of requests not processed because of user errors over a specified period. | Entity name<br/><br/>Operation Result|
+The following metrics are *error metrics*.
+
+| Metric | Description |
+|:-|:|
+| Server Errors | The number of requests not processed because of an error in the Service Bus service over a specified period. |
+| User Errors | The number of requests not processed because of user errors over a specified period. |
### Geo-Replication metrics
-| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
-| - | -- | | | | |
-|Replication Lag Duration| No | Seconds | Max | The offset in seconds between the latest action on the primary and the secondary regions. | |
-|Replication Lag Count | No | Count | Max | The offset in number of operations between the latest action on the primary and the secondary regions. | |
+The following metrics are *geo-replication* metrics:
+
+| Metric | Description |
+|:-|:|
+| Replication Lag Duration | The offset in seconds between the latest action on the primary and the secondary regions. |
+| Replication Lag Count | The offset in number of operations between the latest action on the primary and the secondary regions. |
+++
+- **EntityName**: Service Bus supports messaging entities under the namespace. With the Incoming Requests metric, the Entity Name dimension has a value of `-NamespaceOnlyMetric-` in addition to all your queues and topics. This value represents a request that was made at the namespace level, such as a request to list all queues/topics under the namespace, or requests to entities that failed authentication or authorization. (A dimension-split metrics query is sketched after this list.)
+- **MessagingErrorSubCode**
+- **OperationResult**
+- **Replica**
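As referenced in the **EntityName** item above, here's a hedged sketch of a dimension-split metric query using the Azure.Monitor.Query library; the resource ID is a placeholder:

```csharp
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Sketch: query the Incoming Requests metric for a namespace and split it by
// the EntityName dimension. The resource ID is a placeholder for your
// namespace's Azure Resource Manager ID.
var client = new MetricsQueryClient(new DefaultAzureCredential());

Response<MetricsQueryResult> result = await client.QueryResourceAsync(
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ServiceBus/namespaces/<ns>",
    new[] { "IncomingRequests" },
    new MetricsQueryOptions
    {
        TimeRange = new QueryTimeRange(TimeSpan.FromDays(1)),
        // Requesting all EntityName values returns one time series per entity,
        // plus the namespace-level series named -NamespaceOnlyMetric-.
        Filter = "EntityName eq '*'"
    });

foreach (MetricResult metric in result.Value.Metrics)
{
    foreach (MetricTimeSeriesElement series in metric.TimeSeries)
    {
        foreach (MetricValue point in series.Values)
        {
            Console.WriteLine($"{metric.Name} {point.TimeStamp:u}: total={point.Total}");
        }
    }
}
```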
-## Metric dimensions
+> [!NOTE]
+> Azure Monitor doesn't include dimensions in the exported metrics data sent to a destination like Azure Storage, Azure Event Hubs, or Azure Monitor Logs.
+
-Azure Service Bus supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level.
+### Supported resource logs for Microsoft.ServiceBus/Namespaces
-|Dimension name|Description|
-| - | -- |
-|Entity Name| Service Bus supports messaging entities under the namespace. With the 'Incoming Requests' metric, the Entity Name dimension will have a value of '-NamespaceOnlyMetric-' in addition to all your queues and topics. This represents the request, which was made at the namespace level. Examples include a request to list all queues/topics under the namespace or requests to entities that failed authentication or authorization.|
-## Resource logs
This section lists the types of resource logs you can collect for Azure Service Bus. - Operational logs
Azure Service Bus now has the capability to dispatch logs to either of two desti
:::image type="content" source="media/monitor-service-bus-reference/destination-table-toggle.png" alt-text="Screenshot of dialog box to set destination table." lightbox="media/monitor-service-bus-reference/destination-table-toggle.png"::: ### Operational logs+ Operational log entries include elements listed in the following table: | Name | Description | Supported in AzureDiagnostics | Supported in AZMSOperationalLogs (Resource Specific table)|
Operational log entries include elements listed in the following table:
| `EventProperties` | Operation properties | Yes | Yes| | `Status` | Operation status | Yes | Yes| | `Caller` | Caller of operation (the Azure portal or management client) | Yes | Yes|
-| `Provider`|Name of Service emitting the logs e.g., ServiceBus | No | Yes|
+| `Provider`|Name of Service emitting the logs, such as ServiceBus | No | Yes|
| `Type`| Type of logs emitted | No | Yes| | `Category`| Log Category | Yes | No|
AzureDiagnostics:
"Caller": "ServiceBus Client", "category": "OperationalLogs" }-- ```+ Resource specific table entry: ```json {- "ActivityId": "0000000000-0000-0000-0000-00000000000000", "EventName": "Retrieve Queue", "resourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
Resource specific table entry:
"Caller": "ServiceBus Client", "type": "AZMSOperationalLogs", "Provider" : "SERVICEBUS"- } ``` ### Events and operations captured in operational logs+ Operational logs capture all management operations that are performed on the Azure Service Bus namespace. Data operations aren't captured, because of the high volume of data operations that are conducted on Azure Service Bus. > [!NOTE] > To help you better track data operations, we recommend using client-side tracing.
-The following management operations are captured in operational logs:
+The following management operations are captured in operational logs:
| Scope | Operation | |-|--|
The following management operations are captured in operational logs:
| Topic | - Create Topic<br>- Update Topic<br>- Delete Topic<br>- AutoDelete Delete Topic<br>- Retrieve Topic | | Subscription | - Create Subscription<br>- Update Subscription<br>- Delete Subscription<br>- AutoDelete Delete Subscription<br>- Retrieve Subscription | - > [!NOTE] > Currently, *Read* operations aren't tracked in the operational logs. ### Virtual network and IP filtering logs
-Service Bus virtual network (VNet) connection event JSON includes elements listed in the following table:
+
+Service Bus virtual network connection event JSON includes elements listed in the following table:
| Name | Description | Supported in Azure Diagnostics | Supported in AZMSVnetConnectionEvents (Resource specific table) | | | -- || |
Service Bus virtual network (VNet) connection event JSON includes elements liste
| `Count` | Number of occurrences for the given action | Yes | Yes | | `ResourceId` | Azure Resource Manager resource ID. | Yes | Yes | | `Category` | Log Category | Yes | No |
-| `Provider`|Name of Service emitting the logs e.g., ServiceBus | No | Yes |
+| `Provider`|Name of Service emitting the logs, such as ServiceBus | No | Yes |
| `Type` | Type of Logs Emitted | No | Yes |
-> [!NOTE]
+> [!NOTE]
> Virtual network logs are generated only if the namespace allows access from selected networks or from specific IP addresses (IP filter rules). Here's an example of a virtual network log JSON string:
-AzureDiagnostics;
+AzureDiagnostics:
+ ```json { "SubscriptionId": "0000000-0000-0000-0000-000000000000",
AzureDiagnostics;
"Category": "ServiceBusVNetConnectionEvent" } ```+ Resource specific table entry:+ ```json {
- "SubscriptionId": "0000000-0000-0000-0000-000000000000",
- "NamespaceName": "namespace-name",
- "AddressIp": "1.2.3.4",
- "Action": "Accept Connection",
- "Message": "IP is accepted by IPAddress filter.",
- "Count": 1,
- "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
- "Provider" : "SERVICEBUS",
- "Type": "AZMSVNetConnectionEvents"
+ "SubscriptionId": "0000000-0000-0000-0000-000000000000",
+ "NamespaceName": "namespace-name",
+ "AddressIp": "1.2.3.4",
+ "Action": "Accept Connection",
+ "Message": "IP is accepted by IPAddress filter.",
+ "Count": 1,
+ "ResourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRIPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
+ "Provider" : "SERVICEBUS",
+ "Type": "AZMSVNetConnectionEvents"
} ``` ## Runtime audit logs+ Runtime audit logs capture aggregated diagnostic information for various data plane access operations (such as send or receive messages) in Service Bus.
-> [!NOTE]
+> [!NOTE]
> Runtime audit logs are currently available only in the **premium** tier. Runtime audit logs include the elements listed in the following table:
Runtime audit logs include the elements listed in the following table:
| `Count` | Total number of operations performed during the aggregated period of 1 minute. | Yes | Yes| | `Properties` | Metadata that is specific to the data plane operation. | yes | Yes| | `Category` | Log category | Yes | No|
-| `Provider` |Name of Service emitting the logs e.g., ServiceBus | No | Yes |
+| `Provider` |Name of Service emitting the logs, such as ServiceBus | No | Yes |
| `Type` | Type of Logs emitted | No | Yes| Here's an example of a runtime audit log entry: AzureDiagnostics:+ ```json {
- "ActivityId": "<activity id>",
- "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage | PeekLockMessage",
- "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Service Bus namespace>/servicebus/<service bus name>",
- "Time": "1/1/2021 8:40:06 PM +00:00",
- "Status": "Success | Failure",
- "Protocol": "AMQP | HTTP | SBMP",
- "AuthType": "SAS | AAD",
- "AuthKey": "<AAD Application Name| SAS policy name>",
- "NetworkType": "Public | Private",
- "ClientIp": "x.x.x.x",
- "Count": 1,
- "Category": "RuntimeAuditLogs"
- }
-
+ "ActivityId": "<activity id>",
+ "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage | PeekLockMessage",
+ "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Service Bus namespace>/servicebus/<service bus name>",
+ "Time": "1/1/2021 8:40:06 PM +00:00",
+ "Status": "Success | Failure",
+ "Protocol": "AMQP | HTTP | SBMP",
+ "AuthType": "SAS | AAD",
+ "AuthKey": "<AAD Application Name| SAS policy name>",
+ "NetworkType": "Public | Private",
+ "ClientIp": "x.x.x.x",
+ "Count": 1,
+ "Category": "RuntimeAuditLogs"
+}
```+ Resource specific table entry:+ ```json {
- "ActivityId": "<activity id>",
- "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage | PeekLockMessage",
- "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Service Bus namespace>/servicebus/<service bus name>",
- "TimeGenerated (UTC)": "1/1/2021 8:40:06 PM +00:00",
- "Status": "Success | Failure",
- "Protocol": "AMQP | HTTP | SBMP",
- "AuthType": "SAS | AAD",
- "AuthKey": "<AAD Application Name| SAS policy name>",
- "NetworkType": "Public | Private",
- "ClientIp": "x.x.x.x",
- "Count": 1,
- "Provider": "SERVICEBUS",
- "Type" : "AZMSRuntimeAuditLogs"
- }
-
+ "ActivityId": "<activity id>",
+ "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage | PeekLockMessage",
+ "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Service Bus namespace>/servicebus/<service bus name>",
+ "TimeGenerated (UTC)": "1/1/2021 8:40:06 PM +00:00",
+ "Status": "Success | Failure",
+ "Protocol": "AMQP | HTTP | SBMP",
+ "AuthType": "SAS | AAD",
+ "AuthKey": "<AAD Application Name| SAS policy name>",
+ "NetworkType": "Public | Private",
+ "ClientIp": "x.x.x.x",
+ "Count": 1,
+ "Provider": "SERVICEBUS",
+ "Type" : "AZMSRuntimeAuditLogs"
+}
``` ## Diagnostic Error Logs
-Diagnostic error logs capture error messages for any client side, throttling and Quota exceeded errors. They provide detailed diagnostics for error identification.
-Diagnostic Error Logs include elements listed in below table:
+Diagnostic error logs capture error messages for any client-side, throttling, and quota-exceeded errors. They provide detailed diagnostics for error identification.
+
+Diagnostic error logs include the elements listed in this table:
| Name | Description | Supported in Azure Diagnostics | Supported in AZMSDiagnosticErrorLogs (Resource specific table) | | ||||
Here's an example of Diagnostic error log entry:
```json {
- "ActivityId": "0000000000-0000-0000-0000-00000000000000",
- "SubscriptionId": "<Azure Subscription Id",
- "NamespaceName": "Name of Service Bus Namespace",
- "EntityType": "Queue",
- "EntityName": "Name of Service Bus Queue",
- "ActivityName": "SendMessage",
- "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<service bus namespace name>",,
- "OperationResult": "ClientError",
- "ErrorCount": 1,
- "EventTimestamp": "3/27/2024 1:02:29.126 PM +00:00",
- "ErrorMessage": "the sessionid was not set on a message, and it cannot be sent to the entity. entities that have session support enabled can only receive messages that have the sessionid set to a valid value.",
- "category": "DiagnosticErrorLogs"
- }
-
+ "ActivityId": "0000000000-0000-0000-0000-00000000000000",
+  "SubscriptionId": "<Azure Subscription Id>",
+ "NamespaceName": "Name of Service Bus Namespace",
+ "EntityType": "Queue",
+ "EntityName": "Name of Service Bus Queue",
+ "ActivityName": "SendMessage",
+  "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<service bus namespace name>",
+ "OperationResult": "ClientError",
+ "ErrorCount": 1,
+ "EventTimestamp": "3/27/2024 1:02:29.126 PM +00:00",
+ "ErrorMessage": "the sessionid was not set on a message, and it cannot be sent to the entity. entities that have session support enabled can only receive messages that have the sessionid set to a valid value.",
+ "category": "DiagnosticErrorLogs"
+}
```+ Resource specific table entry:+ ```json {
- "ActivityId": "0000000000-0000-0000-0000-00000000000000",
- "NamespaceName": "Name of Service Bus Namespace",
- "EntityType": "Queue",
- "EntityName": "Name of Service Bus Queue",
- "ActivityName": "SendMessage",
- "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<service bus namespace name>",,
- "OperationResult": "ClientError",
- "ErrorCount": 1,
- "TimeGenerated [UTC]": "1/27/2024 4:02:29.126 PM +00:00",
- "ErrorMessage": "the sessionid was not set on a message, and it cannot be sent to the entity. entities that have session support enabled can only receive messages that have the sessionid set to a valid value.",
- "Type": "AZMSDiagnosticErrorLogs"
- }
-
+ "ActivityId": "0000000000-0000-0000-0000-00000000000000",
+ "NamespaceName": "Name of Service Bus Namespace",
+ "EntityType": "Queue",
+ "EntityName": "Name of Service Bus Queue",
+ "ActivityName": "SendMessage",
+  "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<service bus namespace name>",
+ "OperationResult": "ClientError",
+ "ErrorCount": 1,
+ "TimeGenerated [UTC]": "1/27/2024 4:02:29.126 PM +00:00",
+ "ErrorMessage": "the sessionid was not set on a message, and it cannot be sent to the entity. entities that have session support enabled can only receive messages that have the sessionid set to a valid value.",
+ "Type": "AZMSDiagnosticErrorLogs"
+}
``` + [!INCLUDE [service-bus-amqp-support-retirement](../../includes/service-bus-amqp-support-retirement.md)]
-## Azure Monitor Logs tables
Azure Service Bus uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#service-bus).
-## Next steps
-- For details on monitoring Azure Service Bus, see [Monitoring Azure Service Bus](monitor-service-bus.md).-- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+### Service Bus Microsoft.ServiceBus/namespaces
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
+- [AZMSOperationalLogs](/azure/azure-monitor/reference/tables/azmsoperationallogs#columns)
+- [AZMSVnetConnectionEvents](/azure/azure-monitor/reference/tables/azmsvnetconnectionevents#columns)
+- [AZMSRunTimeAuditLogs](/azure/azure-monitor/reference/tables/azmsruntimeauditlogs#columns)
+- [AZMSApplicationMetricLogs](/azure/azure-monitor/reference/tables/azmsapplicationmetricLogs#columns)
+- [AZMSDiagnosticErrorLogs](/azure/azure-monitor/reference/tables/azmsdiagnosticerrorlogs#columns)
++
+- [Integration resource provider operations](/azure/role-based-access-control/resource-provider-operations#integration)
+
+## Related content
+
+- See [Monitor Azure Service Bus](monitor-service-bus.md) for a description of monitoring Service Bus.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
service-bus-messaging Monitor Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus.md
Title: Monitoring Azure Service Bus
-description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Service Bus.
+ Title: Monitor Azure Service Bus
+description: Start here to learn how to monitor Azure Service Bus by using Azure Monitor metrics, logs, and tools.
Last updated : 07/22/2024+ - Previously updated : 06/26/2023+++ # Monitor Azure Service Bus
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Service Bus and how to analyze and alert on this data with Azure Monitor.
-## What is Azure Monitor?
-Azure Service Bus creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises.
-Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
--- What is Azure Monitor?-- Costs associated with monitoring-- Monitoring data collected in Azure-- Configuring data collection-- Standard tools in Azure for analyzing and alerting on monitoring data-
-The following sections build on this article by describing the specific data gathered for Azure Service Bus. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.
+The following sections build on these articles by describing the specific data gathered for Azure Service Bus. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.
> [!TIP] > To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
-## Monitoring data from Azure Service Bus
-Azure Service Bus collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
-See [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md) for a detailed reference of the logs and metrics created by Azure Service Bus.
+For more information, see [Azure Monitor - Service Bus insights](service-bus-insights.md).
-## Collection and routing
-Platform metrics and the activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+For more information about the resource types for Service Bus, see [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md).
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Service Bus are listed in [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md#resource-logs).
+The diagnostic logging information is stored in containers named **insights-logs-operationlogs** and **insights-metrics-pt1m**.
-> [!NOTE]
-> Azure Monitor doesn't include dimensions in the exported metrics data, that's sent to a destination like Azure Storage, Azure Event Hubs, Log Analytics, etc.
+Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar.
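As a hedged sketch, assuming the Azure.Storage.Blobs library (the storage account name is a placeholder), you could enumerate the archived log blobs under that layout:

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

// Sketch: list the hourly PT1H.json log blobs archived under the path layout
// shown above. The storage account name is a placeholder.
var container = new BlobContainerClient(
    new Uri("https://<storage account>.blob.core.windows.net/insights-logs-operationallogs"),
    new DefaultAzureCredential());

await foreach (var blob in container.GetBlobsAsync(prefix: "resourceId=/SUBSCRIPTIONS/"))
{
    Console.WriteLine(blob.Name);
}
```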
+The diagnostic logging information is stored in event hubs named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select your own event hub.
-### Azure Storage
-The diagnostic logging information is stored in containers named **insights-logs-operationlogs** and **insights-metrics-pt1m**.
+The diagnostic logging information is stored in tables named **AzureDiagnostics** and **AzureMetrics**.
-Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar.
-### Azure Event Hubs
-The diagnostic logging information is stored in event hubs named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select your own event hub.
+For a list of available metrics for Service Bus, see [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md#metrics).
+
+You can analyze metrics for Azure Service Bus, along with metrics from other Azure services, by selecting **Metrics** from the **Monitoring** section on the home page for your Service Bus namespace. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Service Bus data reference metrics](monitor-service-bus-reference.md#metrics).
++
+> [!TIP]
+> Azure Monitor metrics data is available for 90 days. However, when you create charts, only 30 days can be visualized. For example, if you want to visualize a 90-day period, break it into three charts of 30 days each within the 90-day period.
-### Log Analytics
-The diagnostic logging information is stored in tables named **AzureDiagnostics** and **AzureMetrics**.
+
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for Service Bus, see [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md#resource-logs).
### Sample operational log output (formatted) ```json {
- "Environment": "PROD",
- "Region": "East US",
- "ScaleUnit": "PROD-BL2-002",
- "ActivityId": "a097a88a-33e5-4c9c-9c64-20f506ec1375",
- "EventName": "Retrieve Namespace",
- "resourceId": "/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/SPSBUS0213RG/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/SPSBUS0213NS",
- "SubscriptionId": "<Azure subscription ID>",
- "EventTimeString": "5/18/2021 3:25:55 AM +00:00",
- "EventProperties": "{\"SubscriptionId\":\"<Azure subscription ID>\",\"Namespace\":\"spsbus0213ns\",\"Via\":\"https://spsbus0213ns.servicebus.windows.net/$Resources/topics?api-version=2017-04&$skip=0&$top=100\",\"TrackingId\":\"a097a88a-33e5-4c9c-9c64-20f506ec1375_M8CH3_M8CH3_G8\"}",
- "Status": "Succeeded",
- "Caller": "rpfrontdoor",
- "category": "OperationalLogs"
+ "Environment": "PROD",
+ "Region": "East US",
+ "ScaleUnit": "PROD-BL2-002",
+ "ActivityId": "a097a88a-33e5-4c9c-9c64-20f506ec1375",
+ "EventName": "Retrieve Namespace",
+ "resourceId": "/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/SPSBUS0213RG/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/SPSBUS0213NS",
+ "SubscriptionId": "<Azure subscription ID>",
+ "EventTimeString": "5/18/2021 3:25:55 AM +00:00",
+ "EventProperties": "{\"SubscriptionId\":\"<Azure subscription ID>\",\"Namespace\":\"spsbus0213ns\",\"Via\":\"https://spsbus0213ns.servicebus.windows.net/$Resources/topics?api-version=2017-04&$skip=0&$top=100\",\"TrackingId\":\"a097a88a-33e5-4c9c-9c64-20f506ec1375_M8CH3_M8CH3_G8\"}",
+ "Status": "Succeeded",
+ "Caller": "rpfrontdoor",
+ "category": "OperationalLogs"
} ```
The diagnostic logging information is stored in tables named **AzureDiagnostics*
```json {
- "count": 1,
- "total": 4,
- "minimum": 4,
- "maximum": 4,
- "average": 4,
- "resourceId": "/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/SPSBUS0213RG/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/SPSBUS0213NS",
- "time": "2021-05-18T03:27:00.0000000Z",
- "metricName": "IncomingMessages",
- "timeGrain": "PT1M"
+ "count": 1,
+ "total": 4,
+ "minimum": 4,
+ "maximum": 4,
+ "average": 4,
+ "resourceId": "/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/SPSBUS0213RG/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/SPSBUS0213NS",
+ "time": "2021-05-18T03:27:00.0000000Z",
+ "metricName": "IncomingMessages",
+ "timeGrain": "PT1M"
} ```
The diagnostic logging information is stored in tables named **AzureDiagnostics*
> [!NOTE] > When you enable metrics in a diagnostic setting, dimension information is not currently included as part of the information sent to a storage account, event hub, or log analytics.
-The metrics and logs you can collect are discussed in the following sections.
-## Analyzing metrics
-You can analyze metrics for Azure Service Bus, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Service Bus namespace. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Service Bus data reference metrics](monitor-service-bus-reference.md#metrics).
-![Metrics Explorer with Service Bus namespace selected](./media/monitor-service-bus/metrics.png)
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-> [!TIP]
-> Azure Monitor metrics data is available for 90 days. However, when creating charts only 30 days can be visualized. For example, if you want to visualize a 90 day period, you must break it into three charts of 30 days within the 90 day period.
+Following are sample queries that you can use to help you monitor your Azure Service Bus resources:
-### Filtering and splitting
-For metrics that support dimensions, you can apply filters using a dimension value. For example, add a filter with `EntityName` set to the name of a queue or a topic. You can also split a metric by dimension to visualize how different segments of the metric compare with each other. For more information of filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md).
+### [AzureDiagnostics](#tab/AzureDiagnostics)
-## Analyzing logs
-Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable __Send information to Log Analytics__. For more information, see the [Collection and routing](#collection-and-routing) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties.Azure Service Bus has the capability to dispatch logs to either of two destination tables - Azure Diagnostic or Resource specific tables in Log Analytics. For a detailed reference of the logs and metrics, see [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md).
+- Get management operations in the last seven days.
-> [!IMPORTANT]
-> When you select **Logs** from the Azure Service Bus menu, Log Analytics is opened with the query scope set to the current workspace. This means that log queries will only include data from that resource. If you want to run a query that includes data from other databases or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(7d)
+ | where ResourceProvider =="MICROSOFT.SERVICEBUS"
+ | where Category == "OperationalLogs"
+ | summarize count() by EventName_s, _ResourceId
+ ```
+
+- Get runtime audit logs generated in the last hour.
-### Additional Kusto queries
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(1h)
+ | where ResourceProvider =="MICROSOFT.SERVICEBUS"
+ | where Category == "RuntimeAuditLogs"
+ ```
-Following are sample queries that you can use to help you monitor your Azure Service Bus resources:
+- Get access attempts to a key vault that resulted in a "key not found" error.
-### [AzureDiagnostics](#tab/AzureDiagnostics)
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.SERVICEBUS"
+ | where Category == "Error" and OperationName == "wrapkey"
+ | project Message, _ResourceId
+ ```
+
+- Get errors from the past seven days.
+
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(7d)
+ | where ResourceProvider =="MICROSOFT.SERVICEBUS"
+ | where Category == "Error"
+ | summarize count() by EventName_s, _ResourceId
+ ```
+
+- Get operations performed with a key vault to disable or restore the key.
-+ Get management operations in the last 7 days.
-
- ```kusto
- AzureDiagnostics
- | where TimeGenerated > ago(7d)
- | where ResourceProvider =="MICROSOFT.SERVICEBUS"
- | where Category == "OperationalLogs"
- | summarize count() by EventName_s, _ResourceId
- ```
-+ Get runtime audit logs generated in the last one hour.
-
- ```kusto
- AzureDiagnostics
- | where TimeGenerated > ago(1h)
- | where ResourceProvider =="MICROSOFT.SERVICEBUS"
- | where Category == "RuntimeAuditLogs"
- ```
-+ Get access attempts to a key vault that resulted in "key not found" error.
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.SERVICEBUS"
- | where Category == "Error" and OperationName == "wrapkey"
- | project Message, _ResourceId
- ```
-
-+ Get errors from the past 7 days
-
- ```kusto
- AzureDiagnostics
- | where TimeGenerated > ago(7d)
- | where ResourceProvider =="MICROSOFT.SERVICEBUS"
- | where Category == "Error"
- | summarize count() by EventName_s, _ResourceId
- ```
-
-+ Get operations performed with a key vault to disable or restore the key.
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.SERVICEBUS"
- | where (Category == "info" and (OperationName == "disable" or OperationName == "restore"))
- | project Message, _ResourceId
- ```
-
-+ Get all the entities that have been autodeleted
-
- ```kusto
- AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.SERVICEBUS"
- | where Category == "OperationalLogs"
- | where EventName_s startswith "AutoDelete"
- | summarize count() by EventName_s, _ResourceId
- ```
- ### [Resource Specific Table](#tab/Resourcespecifictable)
-
-+ Get deny connection events for namespace
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.SERVICEBUS"
+ | where (Category == "info" and (OperationName == "disable" or OperationName == "restore"))
+ | project Message, _ResourceId
+ ```
+
+- Get all the entities that were autodeleted.
```kusto
- AZMSVNetConnectionEvents
- | extend NamespaceName = tostring(split(_ResourceId, "/")[8])
- | where Provider =~ "ServiceBus"
- | where Action == "Deny Connection"
- | project Action, SubscriptionId, NamespaceName, AddressIp, Reason, Count
- | summarize by Action, NamespaceName
- ```
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.SERVICEBUS"
+ | where Category == "OperationalLogs"
+ | where EventName_s startswith "AutoDelete"
+ | summarize count() by EventName_s, _ResourceId
+ ```
+
+### [Resource Specific Table](#tab/Resourcespecifictable)
-+ Get failed operation logs for namespace
+- Get deny connection events for the namespace.
```kusto
- AZMSOperationalLogs
- | extend NamespaceName = tostring(split(_ResourceId, "/")[8])
- | where Provider =~ "ServiceBus"
- | where isnotnull(NamespaceName) and Status != "Succeeded"
- | project NamespaceName, ResourceId, EventName, Status, Caller, SubscriptionId
- | summarize by NamespaceName, EventName
- ```
+ AZMSVNetConnectionEvents
+ | extend NamespaceName = tostring(split(_ResourceId, "/")[8])
+ | where Provider =~ "ServiceBus"
+ | where Action == "Deny Connection"
+ | project Action, SubscriptionId, NamespaceName, AddressIp, Reason, Count
+ | summarize by Action, NamespaceName
+ ```
-+ Get Send message events for namespace
+- Get failed operation logs for the namespace.
+
+ ```kusto
+ AZMSOperationalLogs
+ | extend NamespaceName = tostring(split(_ResourceId, "/")[8])
+ | where Provider =~ "ServiceBus"
+ | where isnotnull(NamespaceName) and Status != "Succeeded"
+ | project NamespaceName, ResourceId, EventName, Status, Caller, SubscriptionId
+ | summarize by NamespaceName, EventName
+ ```
+
+- Get send message events for the namespace.
```kusto AZMSRunTimeAuditLogs
Following are sample queries that you can use to help you monitor your Azure Ser
| where isnotnull(NamespaceInfo) and ActivityName == "SendMessage" | project NamespaceInfo, ActivityName, Protocol, NetworkType, ClientIp, ResourceId | summarize by NamespaceInfo, ActivityName
- ```
-+ Get Failed authorization results for AAD
+ ```
+
+- Get failed authorization results for Microsoft Entra ID.
```kusto AZMSRunTimeAuditLogs
Following are sample queries that you can use to help you monitor your Azure Ser
| where isnotnull(NamespaceInfo) and isnotnull(AuthKey) and AuthType == "AAD" and Status != "Success" | project NamespaceInfo, AuthKey, ActivityName, Protocol, NetworkType, ClientIp, ResourceId | summarize by NamespaceInfo, AuthKey, ActivityName
- ```
+ ```
+++++
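These queries can also be run from code. Here's a minimal sketch using the Azure.Monitor.Query library; the workspace ID is a placeholder, and the query is the management-operations example shown earlier:

```csharp
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Sketch: run the management-operations query shown earlier against a Log
// Analytics workspace. The workspace ID is a placeholder.
var client = new LogsQueryClient(new DefaultAzureCredential());

Response<LogsQueryResult> response = await client.QueryWorkspaceAsync(
    "<workspace id>",
    @"AzureDiagnostics
    | where ResourceProvider == ""MICROSOFT.SERVICEBUS""
    | where Category == ""OperationalLogs""
    | summarize count() by EventName_s",
    new QueryTimeRange(TimeSpan.FromDays(7)));

foreach (LogsTableRow row in response.Value.Table.Rows)
{
    // summarize count() produces a column named count_.
    Console.WriteLine($"{row["EventName_s"]}: {row["count_"]}");
}
```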
+### Service Bus alert rules
-## Alerts
-You can access alerts for Azure Service Bus by selecting **Alerts** from the **Azure Monitor** section on the home page for your Service Bus namespace. See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md) for details on creating alerts.
+You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md).
-## Next steps
+## Related content
-- For a reference of the logs and metrics, see [Monitoring Azure Service Bus data reference](monitor-service-bus-reference.md).-- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+- See [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md) for a reference of the metrics, logs, and other important values created for Service Bus.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
service-bus-messaging Service Bus Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-managed-service-identity.md
Title: Managed identities for Azure resources with Service Bus description: This article describes how to use managed identities to access with Azure Service Bus entities (queues, topics, and subscriptions). Previously updated : 06/15/2023 Last updated : 07/22/2024 # Authenticate a managed identity with Microsoft Entra ID to access Azure Service Bus resources
Here are the high-level steps to use a managed identity to access a Service Bus
1. Enable managed identity for your client app or environment. For example, enable managed identity for your Azure App Service app, Azure Functions app, or a virtual machine in which your app is running. Here are the articles that help you with this step: - [Configure managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md)
- - [Configure managed identities for Azure resources on a VM](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+ - [Configure managed identities for Azure resources on a virtual machine (VM)](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
1. Assign Azure Service Bus Data Owner, Azure Service Bus Data Sender, or Azure Service Bus Data Receiver role to the managed identity at the appropriate scope (Azure subscription, resource group, Service Bus namespace, or Service Bus queue or topic). For instructions to assign a role to a managed identity, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). 1. In your application, use the managed identity and the endpoint to Service Bus namespace to connect to the namespace. For example, in .NET, you use the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient.-ctor#azure-messaging-servicebus-servicebusclient-ctor(system-string-azure-core-tokencredential)) constructor that takes `TokenCredential` and `fullyQualifiedNamespace` (a string, for example: `cotosons.servicebus.windows.net`) parameters to connect to Service Bus using the managed identity. You pass in [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential), which derives from `TokenCredential` and uses the managed identity. In `DefaultAzureCredentialOptions`, set the `ManagedIdentityClientId` to the ID of client's managed identity. ```csharp
- string fullyQualifiedNamespace = "<your Namespace>.servicebus.windows.net>";
+   string fullyQualifiedNamespace = "<your namespace>.servicebus.windows.net";
string userAssignedClientId = "<your managed identity client ID>"; var credential = new DefaultAzureCredential(
Here are the high-level steps to use a managed identity to access a Service Bus
> You can disable local or SAS key authentication for a Service Bus namespace and allow only Microsoft Entra authentication. For step-by-step instructions, see [Disable local authentication](disable-local-authentication.md). ## Azure built-in roles for Azure Service Bus
-Microsoft Entra authorizes access to secured resources through [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Azure Service Bus defines a set of Azure built-in roles that encompass common sets of permissions used to access Service Bus entities. You can also define custom roles for accessing the data.
+Microsoft Entra authorizes access to secured resources through [Azure role-based access control (RBAC)](../role-based-access-control/overview.md). Azure Service Bus defines a set of Azure built-in roles that encompass common sets of permissions used to access Service Bus entities. You can also define custom roles for accessing the data.
Azure provides the following Azure built-in roles for authorizing access to a Service Bus namespace:
Before you assign an Azure role to a managed identity, determine the scope of ac
The following list describes the levels at which you can scope access to Service Bus resources, starting with the narrowest scope: - **Queue**, **topic**, or **subscription**: Role assignment applies to the specific Service Bus entity. -- **Service Bus namespace**: Role assignment spans the entire topology of Service Bus under the namespace and to the consumer group associated with it.
+- **Service Bus namespace**: Role assignment spans the entire topology of Service Bus under the namespace.
- **Resource group**: Role assignment applies to all the Service Bus resources under the resource group. - **Subscription**: Role assignment applies to all the Service Bus resources in all of the resource groups in the subscription.
service-bus-messaging Service Bus Messages Payloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messages-payloads.md
Title: Azure Service Bus messages, payloads, and serialization | Microsoft Docs description: This article provides an overview of Azure Service Bus messages, payloads, message routing, and serialization. Previously updated : 06/08/2023 Last updated : 07/23/2024 # Messages, payloads, and serialization
The object model of the official Service Bus clients for .NET and Java reflect t
A Service Bus message consists of a binary payload section that Service Bus never handles in any form on the service-side, and two sets of properties. The **broker properties** are predefined by the system. These predefined properties either control message-level functionality inside the broker, or they map to common and standardized metadata items. The **user properties** are a collection of key-value pairs that can be defined and set by the application.
-The predefined broker properties are listed in the following table. The names are used with all official client APIs and also in the [BrokerProperties](/rest/api/servicebus/introduction) JSON object of the HTTP protocol mapping.
+The predefined broker properties are listed in the following table. The names are used with all official client APIs and also in the [`BrokerProperties`](/rest/api/servicebus/introduction) JSON object of the HTTP protocol mapping.
-The equivalent names used at the AMQP protocol level are listed in parentheses.
-While the following names use pascal casing, note that JavaScript and Python clients would use camel and snake casing respectively.
-
-| Property Name | Description |
-| -- | -- |
-| `ContentType` (content-type) | Optionally describes the payload of the message, with a descriptor following the format of RFC2045, Section 5; for example, `application/json`. |
-| `CorrelationId` (correlation-id) | Enables an application to specify a context for the message for the purposes of correlation; for example, reflecting the **MessageId** of a message that is being replied to. |
-| `DeadLetterSource` | Only set in messages that have been dead-lettered and later autoforwarded from the dead-letter queue to another entity. Indicates the entity in which the message was dead-lettered. This property is read-only. |
-| `DeliveryCount` | <p>Number of deliveries that have been attempted for this message. The count is incremented when a message lock expires, or the message is explicitly abandoned by the receiver. This property is read-only.</p> <p>The delivery count isn't incremented when the underlying AMQP connection is closed.</p> |
-| `EnqueuedSequenceNumber` | For messages that have been autoforwarded, this property reflects the sequence number that had first been assigned to the message at its original point of submission. This property is read-only. |
-| `EnqueuedTimeUtc` | The UTC instant at which the message has been accepted and stored in the entity. This value can be used as an authoritative and neutral arrival time indicator when the receiver doesn't want to trust the sender's clock. This property is read-only. |
-| `ExpiresAtUtc` (absolute-expiry-time) | The UTC instant at which the message is marked for removal and no longer available for retrieval from the entity because of its expiration. Expiry is controlled by the **TimeToLive** property and this property is computed from EnqueuedTimeUtc+TimeToLive. This property is read-only. |
-| `Label` or `Subject` (subject) | This property enables the application to indicate the purpose of the message to the receiver in a standardized fashion, similar to an email subject line. |
-| `LockedUntilUtc` | For messages retrieved under a lock (peek-lock receive mode, not presettled) this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
-| `LockToken` | The lock token is a reference to the lock that is being held by the broker in *peek-lock* receive mode. The token can be used to pin the lock permanently through the [Deferral](message-deferral.md) API and, with that, take the message out of the regular delivery state flow. This property is read-only. |
-| `MessageId` (message-id) | The message identifier is an application-defined value that uniquely identifies the message and its payload. The identifier is a free-form string and can reflect a GUID or an identifier derived from the application context. If enabled, the [duplicate detection](duplicate-detection.md) feature identifies and removes second and further submissions of messages with the same **MessageId**. |
-| `PartitionKey` | For [partitioned entities](service-bus-partitioning.md), setting this value enables assigning related messages to the same internal partition, so that submission sequence order is correctly recorded. The partition is chosen by a hash function over this value and can't be chosen directly. For session-aware entities, the **SessionId** property overrides this value. |
-| `ReplyTo` (reply-to) | This optional and application-defined value is a standard way to express a reply path to the receiver of the message. When a sender expects a reply, it sets the value to the absolute or relative path of the queue or topic it expects the reply to be sent to. |
-| `ReplyToSessionId` (reply-to-group-id) | This value augments the **ReplyTo** information and specifies which **SessionId** should be set for the reply when sent to the reply entity. |
-| `ScheduledEnqueueTimeUtc` | For messages that are only made available for retrieval after a delay, this property defines the UTC instant at which the message will be logically enqueued, sequenced, and therefore made available for retrieval. |
-| `SequenceNumber` | The sequence number is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its true identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers monotonically increase and are gapless. They roll over to 0 when the 48-64 bit range is exhausted. This property is read-only. |
-| `SessionId` (group-id) | For session-aware entities, this application-defined value specifies the session affiliation of the message. Messages with the same session identifier are subject to summary locking and enable exact in-order processing and demultiplexing. For entities that aren't session-aware, this value is ignored. |
-| `TimeToLive` | This value is the relative duration after which the message expires, starting from the instant it has been accepted and stored by the broker, as captured in **EnqueuedTimeUtc**. When not set explicitly, the assumed value is the **DefaultTimeToLive** for the respective queue or topic. A message-level **TimeToLive** value can't be longer than the entity's **DefaultTimeToLive** setting. If it's longer, it's silently adjusted. |
-| `To` (to) | This property is reserved for future use in routing scenarios and currently ignored by the broker itself. Applications can use this value in rule-driven autoforward chaining scenarios to indicate the intended logical destination of the message. |
-| `ViaPartitionKey` | If a message is sent via a transfer queue in the scope of a transaction, this value selects the transfer queue partition. |
-
-The abstract message model enables a message to be posted to a queue via HTTPS and can be retrieved via AMQP. In either case, the message looks normal in the context of the respective protocol. The broker properties are translated as needed, and the user properties are mapped to the most appropriate location on the respective protocol message model. In HTTP, user properties map directly to and from HTTP headers; in AMQP they map to and from the **application-properties** map.
+The equivalent names used at the Advanced Message Queuing Protocol (AMQP) level are listed in parentheses. While the following names use Pascal casing, JavaScript and Python clients use camel and snake casing, respectively.
+
+| Property Name | Description |
+| -- | -- |
+| `ContentType` (`content-type`) | Optionally describes the payload of the message, with a descriptor following the format of RFC2045, Section 5; for example, `application/json`. |
+| `CorrelationId` (`correlation-id`) | Enables an application to specify a context for the message for the purposes of correlation; for example, reflecting the **MessageId** of a message that is being replied to. |
+| `DeadLetterSource` | Only set in messages that have been dead-lettered and later autoforwarded from the dead-letter queue to another entity. Indicates the entity in which the message was dead-lettered. This property is read-only. |
+| `DeliveryCount` | <p>Number of deliveries that have been attempted for this message. The count is incremented when a message lock expires, or the message is explicitly abandoned by the receiver. This property is read-only.</p> <p>The delivery count isn't incremented when the underlying AMQP connection is closed.</p> |
+| `EnqueuedSequenceNumber` | For messages that have been autoforwarded, this property reflects the sequence number that had first been assigned to the message at its original point of submission. This property is read-only. |
+| `EnqueuedTimeUtc` | The UTC instant at which the message has been accepted and stored in the entity. This value can be used as an authoritative and neutral arrival time indicator when the receiver doesn't want to trust the sender's clock. This property is read-only. |
+| `ExpiresAtUtc` (`absolute-expiry-time`) | The UTC instant at which the message is marked for removal and no longer available for retrieval from the entity because of its expiration. Expiry is controlled by the **TimeToLive** property and this property is computed from EnqueuedTimeUtc+TimeToLive. This property is read-only. |
+| `Label` or `Subject` (`subject`) | This property enables the application to indicate the purpose of the message to the receiver in a standardized fashion, similar to an email subject line. |
+| `LockedUntilUtc` | For messages retrieved under a lock (peek-lock receive mode, not presettled), this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
+| `LockToken` | The lock token is a reference to the lock that is being held by the broker in *peek-lock* receive mode. The token can be used to pin the lock permanently through the [Deferral](message-deferral.md) API and, with that, take the message out of the regular delivery state flow. This property is read-only. |
+| `MessageId` (`message-id`) | The message identifier is an application-defined value that uniquely identifies the message and its payload. The identifier is a free-form string and can reflect a GUID or an identifier derived from the application context. If enabled, the [duplicate detection](duplicate-detection.md) feature identifies and removes second and further submissions of messages with the same **MessageId**. |
+| `PartitionKey` | For [partitioned entities](service-bus-partitioning.md), setting this value enables assigning related messages to the same internal partition, so that submission sequence order is correctly recorded. The partition is chosen by a hash function over this value and can't be chosen directly. For session-aware entities, the **SessionId** property overrides this value. |
+| `ReplyTo` (`reply-to`) | This optional and application-defined value is a standard way to express a reply path to the receiver of the message. When a sender expects a reply, it sets the value to the absolute or relative path of the queue or topic it expects the reply to be sent to. |
+| `ReplyToSessionId` (`reply-to-group-id`) | This value augments the **ReplyTo** information and specifies which **SessionId** should be set for the reply when sent to the reply entity. |
+| `ScheduledEnqueueTimeUtc` | For messages that are only made available for retrieval after a delay, this property defines the UTC instant at which the message will be logically enqueued, sequenced, and therefore made available for retrieval. |
+| `SequenceNumber` | The sequence number is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its true identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers monotonically increase and are gapless. They roll over to 0 when the 48-64 bit range is exhausted. This property is read-only. |
+| `SessionId` (`group-id`) | For session-aware entities, this application-defined value specifies the session affiliation of the message. Messages with the same session identifier are subject to summary locking and enable exact in-order processing and demultiplexing. For entities that aren't session-aware, this value is ignored. |
+| `TimeToLive` | This value is the relative duration after which the message expires, starting from the instant it has been accepted and stored by the broker, as captured in **EnqueuedTimeUtc**. When not set explicitly, the assumed value is the **DefaultTimeToLive** for the respective queue or topic. A message-level **TimeToLive** value can't be longer than the entity's **DefaultTimeToLive** setting. If it's longer, it's silently adjusted. |
+| `To` (`to`) | This property is reserved for future use in routing scenarios and currently ignored by the broker itself. Applications can use this value in rule-driven autoforward chaining scenarios to indicate the intended logical destination of the message. |
+| `ViaPartitionKey` | If a message is sent via a transfer queue in the scope of a transaction, this value selects the transfer queue partition. |
+
+The abstract message model enables a message to be posted to a queue via HTTPS and can be retrieved via AMQP. In either case, the message looks normal in the context of the respective protocol. The broker properties are translated as needed, and the user properties are mapped to the most appropriate location on the respective protocol message model. In HTTP, user properties map directly to and from HTTP headers; in AMQP they map to and from the `application-properties` map.
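To make the mapping concrete, the following hedged sketch (values are illustrative; assumes the `Azure.Messaging.ServiceBus` client) shows how several broker properties and one user property are set on an outgoing message:

```csharp
using System;
using Azure.Messaging.ServiceBus;

var message = new ServiceBusMessage(BinaryData.FromString("{\"orderId\": 42}"))
{
    ContentType = "application/json",        // content-type: describes the payload
    MessageId = Guid.NewGuid().ToString(),   // message-id: used by duplicate detection
    Subject = "OrderPlaced",                 // subject: the Label/Subject property
    TimeToLive = TimeSpan.FromMinutes(10),   // relative expiry, capped by DefaultTimeToLive
    SessionId = "customer-123"               // group-id: honored only on session-aware entities
};

// User properties map to the AMQP application-properties section
// (or to HTTP headers on the HTTP protocol mapping).
message.ApplicationProperties["region"] = "westus";
```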
## Message routing and correlation
A subset of the broker properties described previously, specifically `To`, `ReplyTo`, `ReplyToSessionId`, `MessageId`, `CorrelationId`, and `SessionId`, help applications route messages to particular destinations.
- **Simple request/reply**: A publisher sends a message into a queue and expects a reply from the message consumer. To receive the reply, the publisher owns a queue into which it expects replies to be delivered. The address of the queue is expressed in the **ReplyTo** property of the outbound message. When the consumer responds, it copies the **MessageId** of the handled message into the **CorrelationId** property of the reply message and delivers the message to the destination indicated by the **ReplyTo** property. One message can yield multiple replies, depending on the application context. (A sketch of this pattern follows the list.)
- **Multicast request/reply**: As a variation of the prior pattern, a publisher sends the message into a topic and multiple subscribers become eligible to consume the message. Each of the subscribers might respond in the fashion described previously. This pattern is used in discovery or roll-call scenarios and the respondent typically identifies itself with a user property or inside the payload. If **ReplyTo** points to a topic, such a set of discovery responses can be distributed to an audience.
- **Multiplexing**: This session feature enables multiplexing of streams of related messages through a single queue or subscription such that each session (or group) of related messages, identified by matching **SessionId** values, are routed to a specific receiver while the receiver holds the session under lock. Read more about the details of sessions [here](message-sessions.md).
-- **Multiplexed request/reply**: This session feature enables multiplexed replies, allowing several publishers to share a reply queue. By setting **ReplyToSessionId**, the publisher can instruct the consumer(s) to copy that value into the **SessionId** property of the reply message. The publishing queue or topic doesn't need to be session-aware. As the message is sent, the publisher can then specifically wait for a session with the given **SessionId** to materialize on the queue by conditionally accepting a session receiver.
+- **Multiplexed request/reply**: This session feature enables multiplexed replies, allowing several publishers to share a reply queue. By setting **ReplyToSessionId**, the publisher can instruct the consumers to copy that value into the **SessionId** property of the reply message. The publishing queue or topic doesn't need to be session-aware. As the message is sent, the publisher can then specifically wait for a session with the given **SessionId** to materialize on the queue by conditionally accepting a session receiver.
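Here's a hedged sketch of the simple request/reply pattern from the first item; the queue names and `Azure.Messaging.ServiceBus` usage are assumptions for illustration:

```csharp
using System;
using Azure.Identity;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient(
    "<your namespace>.servicebus.windows.net", new DefaultAzureCredential());

// Publisher: send a request and advertise where the reply should go.
ServiceBusSender requestSender = client.CreateSender("requests"); // hypothetical queue
var request = new ServiceBusMessage("ping")
{
    MessageId = Guid.NewGuid().ToString(),
    ReplyTo = "replies" // hypothetical reply queue owned by the publisher
};
await requestSender.SendMessageAsync(request);

// Consumer: copy MessageId into CorrelationId and send to the ReplyTo address.
ServiceBusReceiver receiver = client.CreateReceiver("requests");
ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();
var reply = new ServiceBusMessage("pong") { CorrelationId = received.MessageId };
await client.CreateSender(received.ReplyTo).SendMessageAsync(reply);
await receiver.CompleteMessageAsync(received);
```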
-Routing inside of a Service Bus namespace can be realized using autoforward chaining and topic subscription rules. Routing across namespaces can be realized [using Azure LogicApps](https://azure.microsoft.com/services/logic-apps/). As indicated in the previous list, the **To** property is reserved for future use and may eventually be interpreted by the broker with a specially enabled feature. Applications that wish to implement routing should do so based on user properties and not lean on the **To** property; however, doing so now won't cause compatibility issues.
+Routing inside of a Service Bus namespace can be realized using autoforward chaining and topic subscription rules. Routing across namespaces can be realized by using [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/). As indicated in the previous list, the **To** property is reserved for future use and might eventually be interpreted by the broker with a specially enabled feature. Applications that wish to implement routing should do so based on user properties and not lean on the **To** property; however, doing so now won't cause compatibility issues.
## Payload serialization
Unlike the Java or .NET Standard variants, the .NET Framework version of the Service Bus API supports creating `BrokeredMessage` instances by passing arbitrary .NET objects into the constructor.
[!INCLUDE [service-bus-track-0-and-1-sdk-support-retirement](../../includes/service-bus-track-0-and-1-sdk-support-retirement.md)]
-When you use the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. The object is serialized into an AMQP object. The receiver can retrieve those objects with the [GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of `ArrayList` and `IDictionary<string,object>` objects, and any AMQP client can decode them.
+When you use the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. The object is serialized into an AMQP object. The receiver can retrieve those objects with the [`GetBody\<T>()`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of `ArrayList` and `IDictionary<string,object>` objects, and any AMQP client can decode them.
[!INCLUDE [service-bus-amqp-support-retirement](../../includes/service-bus-amqp-support-retirement.md)]
-While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem, and HTTP clients will have trouble decoding such payloads.
+While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem, and HTTP clients have trouble decoding such payloads.
The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control.
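A minimal sketch of that advice, assuming the `Azure.Messaging.ServiceBus` client and a hypothetical `Order` type; the application serializes explicitly to JSON so that both AMQP and HTTP clients can decode the payload:

```csharp
using Azure.Messaging.ServiceBus;

// Sender: the application owns the encoding instead of relying on
// hidden client-side object serialization.
var outbound = new ServiceBusMessage(BinaryData.FromObjectAsJson(new Order(42, 19.99m)))
{
    ContentType = "application/json"
};

// Receiver: decode against the same explicit contract, for example:
// Order order = receivedMessage.Body.ToObjectFromJson<Order>();

record Order(int Id, decimal Total); // hypothetical payload type
```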
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md
Title: Azure Service Bus premium messaging tier
description: This article describes standard and premium tiers of Azure Service Bus. Compares these tiers and provides technical differences.
Previously updated : 05/02/2023
Last updated : 07/22/2024

# Service Bus premium messaging tier

Service Bus Messaging, which includes entities such as queues and topics, combines enterprise messaging capabilities with rich publish-subscribe semantics at cloud scale. Service Bus Messaging is used as the communication backbone for many sophisticated cloud solutions.
-The *Premium* tier of Service Bus Messaging addresses common customer requests around scale, performance, and availability for mission-critical applications. The Premium tier is recommended for production scenarios. Although the feature sets are nearly identical, these two tiers of Service Bus Messaging are designed to serve different use cases.
+The *Premium* tier of Service Bus Messaging addresses common customer requests around scale, performance, and availability for mission-critical applications. We recommend that you use the premium tier for production scenarios. Although the feature sets are nearly identical, standard and premium tiers of Service Bus Messaging are designed to serve different use cases.
Some high-level differences are highlighted in the following table.

| Criteria | Premium | Standard |
| -- | -- | -- |
-| Throughput | High throughput |Variable throughput |
-| Performance | Predictable performance |Variable latency |
-| Pricing | Fixed pricing |Pay as you go variable pricing |
-| Scale | Ability to scale workload up and down |N/A |
-| Message size | Message size up to 100 MB. For more information, see [Large message support](#large-messages-support). |Message size up to 256 KB |
+| Throughput | High throughput | Variable throughput |
+| Performance | Predictable performance | Variable latency |
+| Pricing | Fixed pricing | Pay as you go variable pricing |
+| Scale | Ability to scale workload up and down | N/A |
+| Message size | Message size up to 100 MB. For more information, see [Large message support](#large-messages-support). | Message size up to 256 KB |
**Service Bus Premium Messaging** provides resource isolation at the CPU and memory level so that each customer workload runs in isolation. This resource container is called a *messaging unit*. Each premium namespace is allocated at least one messaging unit. You can purchase 1, 2, 4, 8 or 16 messaging units for each Service Bus Premium namespace. A single workload or entity can span multiple messaging units and the number of messaging units can be changed at will. The result is predictable and repeatable performance for your Service Bus-based solution.
-Not only is this performance more predictable and available, but it's also faster. With Premium Messaging, peak performance is much faster than with the Standard tier.
+Not only is this performance more predictable and available, but it's also faster. With premium messaging, peak performance is much faster than with the standard tier.
-## Premium Messaging technical differences
+## Premium messaging technical differences
-The following sections discuss a few differences between Premium and Standard messaging tiers.
+The following sections discuss a few differences between premium and standard messaging tiers.
### Express entities
-Because Premium messaging runs in an isolated run-time environment, express entities aren't supported in Premium namespaces. An express entity holds a message in memory temporarily before writing it to persistent storage. If you have code running under Standard messaging and want to port it to the Premium tier, ensure that the express entity feature is disabled.
+Because Premium messaging runs in an isolated run-time environment, express entities aren't supported in premium namespaces. An express entity holds a message in memory temporarily before writing it to persistent storage. If you have code running under standard messaging and want to port it to the premium tier, ensure that the express entity feature is disabled.
-## Premium Messaging resource usage
+## Premium messaging resource usage
In general, any operation on an entity might cause CPU and memory usage. Here are some of these operations:

-- Management operations such as CRUD (Create, Retrieve, Update, and Delete) operations on queues, topics, and subscriptions.
+- Management operations such as Create, Retrieve, Update, and Delete (CRUD) operations on queues, topics, and subscriptions.
- Runtime operations (send and receive messages)
- Monitoring operations and alerts
-The additional CPU And memory usage isn't priced additionally though. For the Premium Messaging tier, there's a single price for the message unit.
+The additional CPU and memory usage isn't priced separately, though. For the premium messaging tier, there's a single price for the messaging unit.
The CPU and memory usage are tracked and displayed to you for the following reasons:
## How many messaging units are needed?
-You specify the number of messaging units when provisioning an Azure Service Bus Premium namespace. These messaging units are dedicated resources that are allocated to the namespace. When partitioning has been enabled on the namespace, the messaging units are equally distributed across the partitions.
+You specify the number of messaging units when provisioning an Azure Service Bus premium namespace. These messaging units are dedicated resources that are allocated to the namespace. When partitioning is enabled on the namespace, the messaging units are equally distributed across the partitions.
-The number of messaging units allocated to the Service Bus Premium namespace can be **dynamically adjusted** to factor in the change (increase or decrease) in workloads.
+The number of messaging units allocated to the Service Bus premium namespace can be **dynamically adjusted** to factor in the change (increase or decrease) in workloads.
There are a few factors to take into consideration when deciding the number of messaging units for your architecture:
To learn how to configure a Service Bus namespace to automatically scale (increase or decrease messaging units), see [Automatically update messaging units](automate-update-messaging-units.md).
> The billing meters for Service Bus are hourly. In the case of scaling up, you only pay for the additional resources for the hours that these were used. >
-## Get started with Premium Messaging
+## Get started with premium messaging
-Getting started with Premium Messaging is straightforward and the process is similar to that of Standard Messaging. Begin by [creating a namespace](service-bus-quickstart-portal.md#create-a-namespace-in-the-azure-portal) in the [Azure portal](https://portal.azure.com). Make sure you select **Premium** under **Pricing tier**. Select **View full pricing details** to see more information about each tier.
+Getting started with premium messaging is straightforward and the process is similar to that of standard messaging. Begin by [creating a namespace](service-bus-quickstart-portal.md#create-a-namespace-in-the-azure-portal) in the [Azure portal](https://portal.azure.com). Make sure you select **Premium** under **Pricing tier**. Select **View full pricing details** to see more information about each tier.
:::image type="content" source="./media/service-bus-premium-messaging/select-premium-tier.png" alt-text="Screenshot that shows the selection of premium tier when creating a namespace.":::

You can also create [Premium namespaces using Azure Resource Manager templates](https://azure.microsoft.com/resources/templates/servicebus-pn-ar/).

## Large messages support
-Azure Service Bus premium tier namespaces support the ability to send large message payloads up to 100 MB. This feature is primarily targeted towards legacy workloads that have used larger message payloads on other enterprise messaging brokers and are looking to seamlessly migrate to Azure Service Bus.
+Azure Service Bus premium tier namespaces support the ability to send large message payloads up to 100 MB. This feature is primarily targeted towards legacy workloads that used larger message payloads on other enterprise messaging brokers and are looking to seamlessly migrate to Azure Service Bus.
Here are some considerations when sending large messages on Azure Service Bus:

- Supported on Azure Service Bus premium tier namespaces only.
-- Supported only when using the AMQP protocol. Not supported when using SBMP or HTTP protocols, in the premium tier, the maximum message size for these protocols is 1 MB.
+- Supported only when using the Advanced Message Queuing Protocol (AMQP). Not supported when using the SBMP or HTTP protocols; in the premium tier, the maximum message size for the SBMP and HTTP protocols is 1 MB.
- Supported when using [Java Message Service (JMS) 2.0 client SDK](how-to-use-java-message-service-20.md) and other language client SDKs.
- Sending large messages results in decreased throughput and increased latency.
-- While 100-MB message payloads are supported, it's recommended to keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace.
+- While 100-MB message payloads are supported, we recommend that you keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace.
- The max message size is enforced only for messages sent to the queue or topic. The size limit isn't enforced for the receive operation. This behavior allows you to update the max message size for a given queue (or topic), as in the sketch after this list.
- Batching isn't supported.
- Service Bus Explorer doesn't support sending or receiving large messages.
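As a hedged sketch of that last point about the size limit, the following uses the `Azure.Messaging.ServiceBus` administration client on a premium namespace; the queue name is hypothetical:

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient(
    "<your namespace>.servicebus.windows.net", new DefaultAzureCredential());

// The value is in kilobytes; 102400 KB = 100 MB, the premium-tier maximum.
var options = new CreateQueueOptions("large-orders") // hypothetical queue name
{
    MaxMessageSizeInKilobytes = 102400
};
await adminClient.CreateQueueAsync(options);
```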
The following network security features are available only in the premium tier.
Configuring IP firewall using the Azure portal is available only for the premium tier namespaces. However, you can configure IP firewall rules for other tiers using Azure Resource Manager templates, CLI, PowerShell, or REST API. For more information, see [Configure IP firewall](service-bus-ip-filtering.md).

## Encryption of data at rest
-Azure Service Bus Premium provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). Service Bus Premium uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as customer managed key (CMD) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key is encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the CMK feature is a one time setup process on your namespace. For more information, see [Encrypting Azure Service Bus data at rest](configure-customer-managed-key.md).
+Azure Service Bus Premium provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). Service Bus Premium uses Azure Storage to store the data. All the data stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as a customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key is encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the customer-managed key feature is a one-time setup process on your namespace. For more information, see [Encrypting Azure Service Bus data at rest](configure-customer-managed-key.md).
## Partitioning

There are some differences between the standard and premium tiers when it comes to partitioning.

-- Partitioning is available at entity creation for all queues and topics in basic or standard SKUs. A namespace can have both partitioned and nonpartitioned entities. Partitioning is available at namespace creation for the premium tier, and all queues and topics in that namespace will be partitioned. Any previously migrated partitioned entities in premium namespaces continue to work as expected.
+- Partitioning is available at entity creation for all queues and topics in basic or standard SKUs. A namespace can have both partitioned and nonpartitioned entities. Partitioning is available at namespace creation for the premium tier, and all queues and topics in that namespace are partitioned. Any previously migrated partitioned entities in premium namespaces continue to work as expected.
- When partitioning is enabled in the basic or standard SKUs, Service Bus creates 16 partitions. When partitioning is enabled in the premium tier, the number of partitions is specified during namespace creation.

For more information, see [Partitioning in Service Bus](service-bus-partitioning.md).
Azure Service Bus spreads the risk of catastrophic failures of individual machines or even complete facilities across clusters that span multiple failure domains within a datacenter.
For a premium tier namespace, the outage risk is further spread across three physically separated facilities (availability zones), and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of a datacenter. The all-active Azure Service Bus cluster model within a failure domain, along with the availability zone support, is superior to any on-premises message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures can't sufficiently defend against.
-The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good without having to change your application configurations. Abandoning an Azure region typically involves several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus premium tier.
+The Service Bus Geo-disaster recovery (Geo-DR) feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good without having to change your application configurations. Abandoning an Azure region typically involves several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration. The feature is globally available for the Service Bus premium tier.
+
+The Geo-Disaster Recovery feature ensures that the entire configuration of a namespace (entities, configuration, properties) is continuously replicated from a primary namespace to a secondary namespace with which it's paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move repoints the chosen alias name for the namespace to the secondary namespace and then breaks the pairing. The failover is nearly instantaneous once initiated.
For more information, see [Azure Service Bus Geo-disaster recovery](service-bus-geo-dr.md).
+## Geo-replication
+The Geo-Replication feature is one of the options to [insulate Azure Service Bus applications against outages and disasters](service-bus-outages-disasters.md), providing replication of both metadata (entities, configuration, properties) and data (message data and message property / state changes), whereas the Geo-DR feature described in the previous section replicates only the metadata.
+
+The Geo-Replication feature ensures that the metadata and data of a namespace are continuously replicated from a primary region to one or more secondary regions. The following are replicated:
+
+- Queues, topics, subscriptions, filters.
+- Data, which resides in the entities.
+- All state changes and property changes executed against the messages within a namespace.
+- Namespace configuration.
+
+This feature allows promoting any secondary region to primary, at any time. Promoting a secondary repoints the name for the namespace to the selected secondary region, and switches the roles between the primary and secondary region. The promotion is nearly instantaneous once initiated.
## Java Message Service (JMS) support

The premium tier supports JMS 1.1 and JMS 2.0. For more information, see [How to use JMS 2.0 with Azure Service Bus Premium](how-to-use-java-message-service-20.md).
service-bus-messaging Service Bus Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-troubleshooting-guide.md
You see an error that the entity is no longer available.
The resource might have been deleted. Follow these steps to identify why the entity was deleted.

- Check the activity log to see if there's an Azure Resource Manager request for deletion.
-- Check the operational log to see if there was a direct API call for deletion. To learn how to collect an operational log, see [Collection and routing](monitor-service-bus.md#collection-and-routing). For the schema and an example of an operation log, see [Operation logs](monitor-service-bus-reference.md#operational-logs)
+- Check the operational log to see if there was a direct API call for deletion. To learn how to collect an operational log, see [Monitor Azure Service Bus](monitor-service-bus.md#data-storage). For the schema and an example of an operation log, see [Operation logs](monitor-service-bus-reference.md#operational-logs).
- Check the operational log to see if there was an `autodeleteonidle`-related deletion.
service-fabric Monitor Service Fabric Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/monitor-service-fabric-reference.md
See [Monitor Service Fabric](monitor-service-fabric.md) for details on the data you can collect for Service Fabric and how to use it.
Azure Monitor doesn't collect any platform metrics or resource logs for Service Fabric. You can monitor and collect:

- Service Fabric system, node, and application events. For the full event listing, see [List of Service Fabric events](service-fabric-diagnostics-event-generation-operational.md).
-- Windows performance counters on nodes and applications. For the list of performance counters, see [Performance metrics](service-fabric-diagnostics-event-generation-perf.md).
+- Windows performance counters on nodes and applications. For the list of performance counters, see [Performance metrics](#performance-metrics).
- Cluster, node, and system service health data. You can use the [FabricClient.HealthManager property](/dotnet/api/system.fabric.fabricclient.healthmanager) to get the health client to use for health-related operations, like report health or get entity health.
- Metrics for the guest operating system (OS) that runs on a cluster node, through one or more agents that run on the guest OS.
> [!NOTE]
> The Azure Monitor agent replaces the previously used Azure Diagnostics extension and Log Analytics agent. For more information, see [Overview of Azure Monitor agents](/azure/azure-monitor/agents/agents-overview).
+## Performance metrics
+
+Collect metrics to understand the performance of your cluster and the applications running in it. For Service Fabric clusters, we recommend collecting the following performance counters.
+
+### Nodes
+
+For the machines in your cluster, consider collecting the following performance counters to better understand the load on each machine and make appropriate cluster scaling decisions.
+
+| Counter Category | Counter Name |
+| | |
+| Logical Disk | Logical Disk Free Space |
+| PhysicalDisk(per Disk) | Avg. Disk Read Queue Length |
+| PhysicalDisk(per Disk) | Avg. Disk Write Queue Length |
+| PhysicalDisk(per Disk) | Avg. Disk sec/Read |
+| PhysicalDisk(per Disk) | Avg. Disk sec/Write |
+| PhysicalDisk(per Disk) | Disk Reads/sec |
+| PhysicalDisk(per Disk) | Disk Read Bytes/sec |
+| PhysicalDisk(per Disk) | Disk Writes/sec |
+| PhysicalDisk(per Disk) | Disk Write Bytes/sec |
+| Memory | Available MBytes |
+| PagingFile | % Usage |
+| Processor(Total) | % Processor Time |
+| Process (per service) | % Processor Time |
+| Process (per service) | ID Process |
+| Process (per service) | Private Bytes |
+| Process (per service) | Thread Count |
+| Process (per service) | Virtual Bytes |
+| Process (per service) | Working Set |
+| Process (per service) | Working Set - Private |
+| Network Interface(all-instances) | Bytes recd |
+| Network Interface(all-instances) | Bytes sent |
+| Network Interface(all-instances) | Bytes total |
+| Network Interface(all-instances) | Output Queue Length |
+| Network Interface(all-instances) | Packets Outbound Discarded |
+| Network Interface(all-instances) | Packets Received Discarded |
+| Network Interface(all-instances) | Packets Outbound Errors |
+| Network Interface(all-instances) | Packets Received Errors |
+
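If you want to sample one of these counters manually rather than through an agent, here's a minimal C# sketch, assuming a Windows node where the `System.Diagnostics.PerformanceCounter` API is available:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CpuSample
{
    static void Main()
    {
        // "% Processor Time" on the "_Total" instance, as listed in the table above.
        using var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();    // the first reading is always 0; prime the counter
        Thread.Sleep(1000); // allow a sampling interval to elapse
        Console.WriteLine($"CPU: {cpu.NextValue():F1}%");
    }
}
```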
+### .NET applications and services
+
+Collect the following counters if you are deploying .NET services to your cluster.
+
+| Counter Category | Counter Name |
+| | |
+| .NET CLR Memory (per service) | Process ID |
+| .NET CLR Memory (per service) | # Total committed Bytes |
+| .NET CLR Memory (per service) | # Total reserved Bytes |
+| .NET CLR Memory (per service) | # Bytes in all Heaps |
+| .NET CLR Memory (per service) | Large Object Heap size |
+| .NET CLR Memory (per service) | # GC Handles |
+| .NET CLR Memory (per service) | # Gen 0 Collections |
+| .NET CLR Memory (per service) | # Gen 1 Collections |
+| .NET CLR Memory (per service) | # Gen 2 Collections |
+| .NET CLR Memory (per service) | % Time in GC |
+
+### Service Fabric's custom performance counters
+
+Service Fabric generates a substantial number of custom performance counters. If you have the SDK installed, you can see the comprehensive list on your Windows machine in the Performance Monitor application (**Start** > **Performance Monitor**).
+
+In the applications you deploy to your cluster, if you use Reliable Actors, add counters from the `Service Fabric Actor` and `Service Fabric Actor Method` categories (see [Service Fabric Reliable Actors Diagnostics](service-fabric-reliable-actors-diagnostics.md)).
+
+If you use Reliable Services or Service Remoting, there are similar `Service Fabric Service` and `Service Fabric Service Method` counter categories that you should collect counters from. For more information, see [monitoring with service remoting](service-fabric-reliable-serviceremoting-diagnostics.md) and [reliable services performance counters](service-fabric-reliable-services-diagnostics.md#performance-counters).
+
+If you use Reliable Collections, we recommend adding the `Avg. Transaction ms/Commit` counter from the `Service Fabric Transactional Replicator` category to collect the average commit latency per transaction.
+ [!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)]
-### Service Fabric Clusters
+### Service Fabric clusters
Microsoft.ServiceFabric/clusters - [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
Microsoft.ServiceFabric/clusters
- See [Monitor Service Fabric](monitor-service-fabric.md) for a description of monitoring Service Fabric.
- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
- See [List of Service Fabric events](service-fabric-diagnostics-event-generation-operational.md) for the list of Service Fabric system, node, and application events.
-- See [Performance metrics](service-fabric-diagnostics-event-generation-perf.md) for the list of Windows performance counters on nodes and applications.
service-fabric Monitor Service Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/monitor-service-fabric.md
Azure Service Fabric has the following layers that you can monitor:

-- Service health and performance counters for the service *infrastructure*. For more information, see [Performance metrics](service-fabric-diagnostics-event-generation-perf.md).
-- Client metrics, logs, and events for the *platform* or *cluster* nodes, including container metrics. The metrics and logs are different for Linux or Windows nodes. For more information, see [Monitor the cluster](service-fabric-diagnostics-event-generation-infra.md).
-- The *applications* that run on the nodes. You can monitor applications with Application Insights key or SDK, EventStore, or ASP.NET Core logging. For more information, see [Application logging](service-fabric-diagnostics-event-generation-app.md).
+- [Application monitoring](#application-monitoring): The *applications* that run on the nodes. You can monitor applications with Application Insights key or SDK, EventStore, or ASP.NET Core logging.
+- [Platform (cluster) monitoring](#platform-cluster-monitoring): Client metrics, logs, and events for the *platform* or *cluster* nodes, including container metrics. The metrics and logs are different for Linux or Windows nodes.
+- [Infrastructure (performance) monitoring](#infrastructure-performance-monitoring): Service health and performance counters for the service *infrastructure*.
You can monitor how your applications are used, the actions taken by the Service Fabric platform, your resource utilization with performance counters, and the overall health of your cluster. [Azure Monitor logs](service-fabric-diagnostics-event-analysis-oms.md) and [Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md) offer built-in integration with Service Fabric.

-- For an overview of monitoring and diagnostics for Service Fabric infrastructure, platform, and applications, see [Monitoring and diagnostics for Azure Service Fabric](service-fabric-diagnostics-overview.md).
+- To learn about best practices, see [Monitoring and diagnostic best practices for Azure Service Fabric](service-fabric-best-practices-monitoring.md).
- For a tutorial that shows how to view Service Fabric events and health reports, query the EventStore APIs, and monitor performance counters, see [Tutorial: Monitor a Service Fabric cluster in Azure](service-fabric-tutorial-monitor-cluster.md).
+- To learn how to configure Azure Monitor logs to monitor your Windows containers orchestrated on Service Fabric, see [Tutorial: Monitor Windows containers on Service Fabric using Azure Monitor logs](service-fabric-tutorial-monitoring-wincontainers.md).
### Service Fabric Explorer

[Service Fabric Explorer](service-fabric-visualizing-your-cluster.md), a desktop application for Windows, macOS, and Linux, is an open-source tool for inspecting and managing Azure Service Fabric clusters. To enable automation, every action that can be taken through Service Fabric Explorer can also be done through PowerShell or a REST API.
-### EventStore
+## Application monitoring
-[EventStore](service-fabric-diagnostics-eventstore.md) is a feature that shows Service Fabric platform events in Service Fabric Explorer and programmatically through the [Service Fabric Client Library](/dotnet/api/overview/azure/service-fabric#client-library) REST API. You can see a snapshot view of what's going on in your cluster for each node, service, and application, and query based on the time of the event.
+Application monitoring tracks how features and components of your application are being used. You want to monitor your applications to make sure issues that impact users are caught. Application monitoring is the responsibility of the users developing an application and its services, because it's unique to the business logic of your application. Monitoring your applications can be useful in the following scenarios:
+* How much traffic is my application experiencing? - Do you need to scale your services to meet user demands or address a potential bottleneck in your application?
+* Are my service-to-service calls successful and tracked?
* What actions are taken by the users of my application? - Collecting telemetry can guide future feature development and better diagnostics for application errors.
+* Is my application throwing unhandled exceptions?
+* What is happening within the services running inside my containers?
-The EventStore APIs are available only for Windows clusters running on Azure. On Windows machines, these events are fed into the Event Log, so you can see Service Fabric Events in Event Viewer.
+The great thing about application monitoring is that developers can use whatever tools and frameworks they'd like, since it lives within the context of your application! You can learn more about the Azure solution for application monitoring with Azure Monitor Application Insights in [Event analysis with Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md).
-### Application Insights
+We also have a tutorial on how to [set this up for .NET applications](service-fabric-tutorial-monitoring-aspnet.md). The tutorial walks through installing the right tools, writing custom telemetry in your application, and viewing the application diagnostics and telemetry in the Azure portal.
-Application Insights integrates with Service Fabric to provide Service Fabric specific metrics and tooling experiences for Visual Studio and Azure portal. Application Insights provides a comprehensive out-of-the-box logging experience. For more information, see [Event analysis and visualization with Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md).
+### Application logging
+Instrumenting your code is not only a way to gain insights about your users, but also the only way you can know whether something is wrong in your application, and to diagnose what needs to be fixed. Although technically it's possible to connect a debugger to a production service, it's not a common practice. So, having detailed instrumentation data is important.
-For more information about the resource types for Azure Service Fabric, see [Service Fabric monitoring data reference](monitor-service-fabric-reference.md).
+Some products automatically instrument your code. Although these solutions can work well, manual instrumentation is almost always required to be specific to your business logic. In the end, you must have enough information to forensically debug the application. Service Fabric applications can be instrumented with any logging framework. This section describes a few different approaches to instrumenting your code, and when to choose one approach over another.
+- **Application Insights SDK**: Application Insights has a rich integration with Service Fabric out of the box. Users can add the Application Insights Service Fabric NuGet packages and view the data and logs that are created and collected in the Azure portal. Additionally, users are encouraged to add their own telemetry in order to diagnose and debug their applications and track which services and parts of their application are used the most. The [TelemetryClient](/dotnet/api/microsoft.applicationinsights.telemetryclient) class in the SDK provides many ways to track telemetry in your applications. For more information, see [Event analysis and visualization with Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md).
+ Check out an example of how to instrument and add application insights to your application in our tutorial for [monitoring and diagnosing a .NET application](service-fabric-tutorial-monitoring-aspnet.md).
+- **EventSource**: When you create a Service Fabric solution from a template in Visual Studio, an **EventSource**-derived class (**ServiceEventSource** or **ActorEventSource**) is generated. A template is created, in which you can add events for your application or service. The **EventSource** name **must** be unique, and should be renamed from the default template string MyCompany-&lt;solution&gt;-&lt;project&gt;. Having multiple **EventSource** definitions that use the same name causes an issue at run time. Each defined event must have a unique identifier. If an identifier isn't unique, a runtime failure occurs. Some organizations preassign ranges of values for identifiers to avoid conflicts between separate development teams. (A minimal sketch appears after this list.) For more information, see [Vance's blog](/archive/blogs/vancem/introduction-tutorial-logging-etw-events-in-c-system-diagnostics-tracing-eventsource) or the [MSDN documentation](/previous-versions/msp-n-p/dn774985(v=pandp.20)).
+
+- **ASP.NET Core logging**: It's important to carefully plan how you will instrument your code. The right instrumentation plan can help you avoid potentially destabilizing your code base, and then needing to reinstrument the code. To reduce risk, you can choose an instrumentation library like [Microsoft.Extensions.Logging](https://www.nuget.org/packages/Microsoft.Extensions.Logging/), which is part of Microsoft ASP.NET Core. ASP.NET Core has an [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) interface that you can use with the provider of your choice, while minimizing the effect on existing code. You can use the code in ASP.NET Core on Windows and Linux, and in the full .NET Framework, so your instrumentation code is standardized.
+
+For examples on how to use these suggestions, see [Add logging to your Service Fabric application](service-fabric-how-to-diagnostics-log.md).
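For the **EventSource** approach, here's a minimal sketch; the source name and event IDs are illustrative, but each event carries a unique identifier, per the guidance above:

```csharp
using System.Diagnostics.Tracing;

// The EventSource name must be unique; rename the default template string.
[EventSource(Name = "MyCompany-MySolution-MyService")] // hypothetical name
internal sealed class ServiceEventSource : EventSource
{
    public static readonly ServiceEventSource Current = new();

    // Each event must have a unique ID; duplicate IDs cause runtime failures.
    [Event(1, Level = EventLevel.Informational, Message = "{0}")]
    public void ServiceMessage(string message) => WriteEvent(1, message);

    [Event(2, Level = EventLevel.Error, Message = "Request failed: {0}")]
    public void RequestFailed(string error) => WriteEvent(2, error);
}
```

A service would then log with, for example, `ServiceEventSource.Current.ServiceMessage("Service started");`.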
+
+## Platform (cluster) monitoring
+
+A user is in control of what telemetry comes from their application, since a user writes the code itself, but what about the diagnostics from the Service Fabric platform? One of Service Fabric's goals is to keep applications resilient to hardware failures. This goal is achieved through the platform's system services' ability to detect infrastructure issues and rapidly fail over workloads to other nodes in the cluster. But what if the system services themselves have issues? Or what if, in attempting to deploy or move a workload, rules for the placement of services are violated? Service Fabric provides diagnostics for these scenarios and more, to make sure you're informed about activity taking place in your cluster.
+
+For more information on platform (cluster) monitoring, see [Monitoring the cluster](service-fabric-diagnostics-event-generation-infra.md).
+
+### Service Fabric events
+
+Service Fabric provides a comprehensive set of diagnostics events out of the box, which you can access through the EventStore or the operational event channel the platform exposes. These [Service Fabric events](service-fabric-diagnostics-events.md) illustrate actions done by the platform on different entities such as nodes, applications, services, and partitions. The same events are available on both Windows and Linux clusters.
+
+- **Service Fabric event channels**: On Windows, Service Fabric events are available from a single ETW provider with a set of relevant `logLevelKeywordFilters` used to pick between Operational and Data & Messaging channels. This is how outgoing Service Fabric events are separated so they can be filtered on as needed. On Linux, Service Fabric events come through LTTng and are put into one storage table, from where they can be filtered as needed. These channels contain curated, structured events that can be used to better understand the state of your cluster. Diagnostics are enabled by default at cluster creation time, which creates an Azure Storage table where the events from these channels are sent for you to query in the future.
+
+- [EventStore](service-fabric-diagnostics-eventstore.md) is a feature that shows Service Fabric platform events in Service Fabric Explorer and programmatically through the [Service Fabric Client Library](/dotnet/api/overview/azure/service-fabric#client-library) REST API. You can see a snapshot view of what's going on in your cluster for each node, service, and application, and query based on the time of the event. The EventStore APIs are available only for Windows clusters running on Azure. On Windows machines, these events are fed into the Event Log, so you can see Service Fabric Events in Event Viewer.
+
+![Screenshot shows the EVENTS tab of the Nodes pane several events, including a NodeDown event.](media/service-fabric-diagnostics-overview/eventstore.png)
+
+For example, if a node goes down, the platform emits a `NodeDown` event, and your monitoring tool of choice can notify you immediately. Other common examples include `ApplicationUpgradeRollbackStarted` or `PartitionReconfigured` during a failover. **The same events are available on both Windows and Linux clusters.**
+
+The events are sent through standard channels on both Windows and Linux and can be read by any monitoring tool that supports them. The Azure Monitor solution is Azure Monitor logs. Feel free to read more about our [Azure Monitor logs integration](service-fabric-diagnostics-event-analysis-oms.md), which includes a custom operational dashboard for your cluster and some sample queries from which you can create alerts. More cluster monitoring concepts are available at [Platform level event and log generation](service-fabric-diagnostics-event-generation-infra.md).
+
+### Health monitoring
+
+The Service Fabric platform includes a health model, which provides extensible health reporting for the status of entities in a cluster. Each node, application, service, partition, replica, or instance has a continuously updatable health status of "OK", "Warning", or "Error". Think of Service Fabric events as verbs done by the cluster to various entities, and health as an adjective for each entity. Each time the health of a particular entity transitions, an event is also emitted. This way, you can set up queries and alerts for health events in your monitoring tool of choice, just like any other event.
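
As a sketch of such a query, the following filters platform events down to health-related ones. Note that the `Category` column is an assumption based on the common event fields described in the events reference; verify the exact column name against the `ServiceFabricOperationalEvent` schema in your workspace before relying on it.

```kusto
// Health-category platform events, newest first
// NOTE: the Category column name is an assumption; check your
// ServiceFabricOperationalEvent table schema in Log Analytics
ServiceFabricOperationalEvent
| where Category == "Health"
| sort by TimeGenerated desc
```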
+
+Additionally, you can override the health of entities. If your application is going through an upgrade and your validation tests are failing, you can write to Service Fabric health by using the Health API to indicate that your application is no longer healthy, and Service Fabric automatically rolls back the upgrade. For more on the health model, check out the [introduction to Service Fabric health monitoring](service-fabric-health-introduction.md).
+
+![Screenshot of SFX health dashboard.](media/service-fabric-diagnostics-overview/sfx-healthstatus.png)
+
+### Watchdogs
-### Performance counters
+Generally, a watchdog is a separate service that watches health and load across services, pings endpoints, and reports unexpected health events in the cluster. This can help prevent errors that might not be detected based on the performance of a single service alone. Watchdogs are also a good place to host code that performs remedial actions that don't require user interaction, such as cleaning up log files in storage at certain time intervals. If you want a fully implemented, open-source Service Fabric watchdog service that includes an easy-to-use extensibility model and runs in both Windows and Linux clusters, see the [FabricObserver](https://github.com/microsoft/service-fabric-observer) project. FabricObserver is production-ready software. We encourage you to deploy FabricObserver to your test and production clusters and extend it to meet your needs, either through its plug-in model (the recommended approach) or by forking it and writing your own built-in observers.
-Service Fabric system performance is usually measured through performance counters. These performance counters can come from various sources including the operating system, the .NET framework, or the Service Fabric platform itself. For a list of performance counters that should be collected at the infrastructure level, see [Performance metrics](service-fabric-diagnostics-event-generation-perf.md).
+## Infrastructure (performance) monitoring
-Service Fabric also provides a set of performance counters for the Reliable Services and Actors programming models. For more information, see [Monitoring for Reliable Service Remoting](service-fabric-reliable-serviceremoting-diagnostics.md#performance-counters) and [Performance monitoring for Reliable Actors](service-fabric-reliable-actors-diagnostics.md#performance-counters).
+Now that we've covered the diagnostics in your application and the platform, how do we know the hardware is functioning as expected? Monitoring your underlying infrastructure is a key part of understanding the state of your cluster and your resource utilization. Measuring system performance depends on many factors that can be subjective depending on your workloads. These factors are typically measured through performance counters, which can come from a variety of sources, including the operating system, the .NET Framework, or the Service Fabric platform itself. Some scenarios in which they're useful:
+
+* Am I utilizing my hardware efficiently? Do you want to run your hardware at 90% CPU or at 10% CPU? This information comes in handy when you scale your cluster or optimize your application's processes.
+* Can I predict infrastructure issues proactively? Many issues are preceded by sudden changes (drops) in performance, so you can use performance counters such as network I/O and CPU utilization to predict and diagnose issues proactively.
+
+For a list of performance counters that should be collected at the infrastructure level, see [Performance metrics](monitor-service-fabric-reference.md#performance-metrics).
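
Once counters flow into a Log Analytics workspace, trend queries over them support both scenarios above. The following is a minimal sketch that assumes counters land in the standard `Perf` table (the same table the alerting examples later in this document use); the memory counter shown is the standard Windows one and is only an example of what you might choose to collect.

```kusto
// Trend available memory per node in 15-minute buckets to spot
// capacity pressure before it becomes an outage
Perf
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| summarize AvgAvailableMB = avg(CounterValue) by bin(TimeGenerated, 15m), Computer
```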
Azure Monitor Logs is recommended for monitoring cluster level events. After you configure the [Log Analytics agent](service-fabric-diagnostics-oms-agent.md) with your workspace, you can collect:
-- Performance metrics such as CPU Utilization.
+- Performance metrics such as CPU utilization.
- .NET performance counters such as process level CPU utilization.
- Service Fabric performance counters such as number of exceptions from a reliable service.
-- Container metrics such as CPU Utilization.
+- Container metrics such as CPU utilization.
+
+For more information about the resource types for Azure Service Fabric, see [Service Fabric monitoring data reference](monitor-service-fabric-reference.md).
+
### Guest OS metrics
Service Fabric can collect the following logs:
- For Linux clusters, Azure Monitor Logs is also the recommended tool for Azure platform and infrastructure monitoring. Linux platform diagnostics require different configuration. For more information, see [Service Fabric Linux cluster events in Syslog](service-fabric-diagnostics-oms-syslog.md).
- You can configure the Azure Monitor agent to send guest OS logs to Azure Monitor Logs, where you can query on them by using Log Analytics.
- You can write Service Fabric container logs to *stdout* or *stderr* so they're available in Azure Monitor Logs.
+- You can set up the [container monitoring solution](service-fabric-diagnostics-oms-containers.md) for Azure Monitor Logs to view container events.
-### Service Fabric events
-
-Service Fabric provides a comprehensive set of diagnostics events out of the box, which you can access through the EventStore or the operational event channel the platform exposes. These [Service Fabric events](service-fabric-diagnostics-events.md) illustrate actions done by the platform on different entities such as nodes, applications, services, and partitions. The same events are available on both Windows and Linux clusters.
-
-On Windows, Service Fabric events are available from a single Event Tracing for Windows (ETW) provider with a set of relevant `logLevelKeywordFilters` used to pick between Operational and Data & Messaging channels. On Linux, Service Fabric events come through LTTng and are put into one Azure Storage table, from where they can be filtered as needed. Diagnostics can be enabled at cluster creation time, which creates a Storage table where the events from these channels are sent.
-
-The events are sent through standard channels on both Windows and Linux and can be read by any monitoring tool that supports them, including Azure Monitor Logs. For more information, see [Azure Monitor logs integration](service-fabric-diagnostics-event-analysis-oms.md).
-
-### Health monitoring
-
-The Service Fabric platform includes a health model, which provides extensible health reporting for the status of entities in a cluster. Each node, application, service, partition, replica, or instance has a continuously updatable health status. Each time the health of a particular entity transitions, an event is also emitted. You can set up queries and alerts for health events in your monitoring tool, just like any other event.
+### Other logging solutions
-## Partner logging solutions
+Although the two solutions we recommend, [Azure Monitor logs](service-fabric-diagnostics-event-analysis-oms.md) and [Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md), have built-in integration with Service Fabric, many events are written out through ETW providers and are extensible with other logging solutions. You should also look into the [Elastic Stack](https://www.elastic.co/products) (especially if you're considering running a cluster in an offline environment), [Dynatrace](https://www.dynatrace.com/), or any other platform of your preference. For a list of integrated partners, see [Azure Service Fabric Monitoring Partners](service-fabric-diagnostics-partners.md).
-Many events are written out through ETW providers and are extensible with other logging solutions. Examples are [Elastic Stack](https://www.elastic.co/products), especially if you're running a cluster in an offline environment, or [Dynatrace](https://www.dynatrace.com/). For a list of integrated partners, see [Azure Service Fabric Monitoring Partners](service-fabric-diagnostics-partners.md).
+When choosing a platform, consider how comfortable you are with its user interface, its querying capabilities, the custom visualizations and dashboards it offers, and the additional tools it provides to enhance your monitoring experience.
[!INCLUDE [horz-monitor-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-activity-log.md)]
The following table lists some alert rules for Service Fabric. These alerts are
[!INCLUDE [horz-monitor-advisor-recommendations](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-advisor-recommendations.md)]
+## Recommended setup
+
+Now that we've gone over each area of monitoring and example scenarios, here's a summary of the Azure monitoring tools and setup needed to monitor all the areas above:
+
+* Application monitoring with [Application Insights](service-fabric-tutorial-monitoring-aspnet.md)
+* Cluster monitoring with [Diagnostics Agent](service-fabric-diagnostics-event-aggregation-wad.md) and [Azure Monitor logs](service-fabric-diagnostics-oms-setup.md)
+* Infrastructure monitoring with [Azure Monitor logs](service-fabric-diagnostics-oms-agent.md)
+
+You can also use and modify the [sample ARM template](service-fabric-diagnostics-oms-setup.md#deploy-azure-monitor-logs-with-azure-resource-manager) to automate deployment of all necessary resources and agents.
+
## Related content

- See [Service Fabric monitoring data reference](monitor-service-fabric-reference.md) for a reference of the metrics, logs, and other important values created for Service Fabric.
service-fabric Service Fabric Best Practices Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-applications.md
Service Fabric Reliable Actors enables you to easily create stateful, virtual ac
## Application diagnostics
-Be thorough about adding [application logging](./service-fabric-diagnostics-event-generation-app.md) in service calls. It will help you diagnose scenarios in which services call each other. For example, when A calls B calls C calls D, the call could fail anywhere. If you don't have enough logging, failures are hard to diagnose. If the services are logging too much because of call volumes, be sure to at least log errors and warnings.
+Be thorough about adding [application logging](monitor-service-fabric.md#application-logging) in service calls. It helps you diagnose scenarios in which services call each other. For example, when A calls B calls C calls D, the call could fail anywhere. If you don't have enough logging, failures are hard to diagnose. If the services log too much because of call volumes, be sure to at least log errors and warnings.
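
If your services send telemetry to Application Insights, a query like the following can help locate where a call chain broke. This is a sketch against the standard Application Insights `dependencies` table; `operation_Id` correlates entries that belong to the same end-to-end operation, so once you find a failed call you can pull every event from that chain.

```kusto
// Recent failed outbound calls between services, with the correlation ID
// (operation_Id) you can use to retrieve the rest of the call chain
dependencies
| where timestamp > ago(1d) and success == false
| project timestamp, cloud_RoleName, name, target, resultCode, operation_Id
| order by timestamp desc
```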
## Design guidance on Azure

* Visit the [Azure architecture center](/azure/architecture/microservices/) for design guidance on [building microservices on Azure](/azure/architecture/microservices/).
service-fabric Service Fabric Best Practices Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-monitoring.md
Last updated 07/14/2022
# Monitoring and diagnostic best practices for Azure Service Fabric
-[Monitoring and diagnostics](./service-fabric-diagnostics-overview.md) are critical to developing, testing, and deploying workloads in any cloud environment. For example, you can track how your applications are used, the actions taken by the Service Fabric platform, your resource utilization with performance counters, and the overall health of your cluster. You can use this information to diagnose and correct issues, and prevent them from occurring in the future.
+[Monitoring and diagnostics](monitor-service-fabric.md) are critical to developing, testing, and deploying workloads in any cloud environment. For example, you can track how your applications are used, the actions taken by the Service Fabric platform, your resource utilization with performance counters, and the overall health of your cluster. You can use this information to diagnose and correct issues, and prevent them from occurring in the future.
## Application monitoring
Generally, a watchdog is a separate service that watches health and load across
## Next steps
-* Get started instrumenting your applications: [Application level event and log generation](service-fabric-diagnostics-event-generation-app.md).
+* Get started instrumenting your applications: [Application level event and log generation](monitor-service-fabric.md#application-logging).
* Go through the steps to set up Application Insights for your application with [Monitor and diagnose an ASP.NET Core application on Service Fabric](service-fabric-tutorial-monitoring-aspnet.md).
* Learn more about monitoring the platform and the events Service Fabric provides for you: [Platform level event and log generation](service-fabric-diagnostics-event-generation-infra.md).
* Configure Azure Monitor logs integration with Service Fabric: [Set up Azure Monitor logs for a cluster](service-fabric-diagnostics-oms-setup.md)
service-fabric Service Fabric Content Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-content-roadmap.md
Service Fabric provides multiple ways to [view health reports](service-fabric-vi
[Check this page for a training video that describes the Service Fabric health model and how it's used:](/shows/building-microservices-applications-on-azure-service-fabric/service-fabric-health-system)

## Monitoring and diagnostics
-[Monitoring and diagnostics](service-fabric-diagnostics-overview.md) are critical to developing, testing, and deploying applications and services in any environment. Service Fabric solutions work best when you plan and implement monitoring and diagnostics that help ensure applications and services are working as expected in a local development environment or in production.
+[Monitoring and diagnostics](monitor-service-fabric.md) are critical to developing, testing, and deploying applications and services in any environment. Service Fabric solutions work best when you plan and implement monitoring and diagnostics that help ensure applications and services are working as expected in a local development environment or in production.
The main goals of monitoring and diagnostics are to:
The overall workflow of monitoring and diagnostics consists of three steps:
2. Event aggregation: generated events need to be collected and aggregated before they can be displayed
3. Analysis: events need to be visualized and accessible in some format, to allow for analysis and display as needed
-Multiple products are available that cover these three areas, and you are free to choose different technologies for each. For more information, read [Monitoring and diagnostics for Azure Service Fabric](service-fabric-diagnostics-overview.md).
+Multiple products are available that cover these three areas, and you are free to choose different technologies for each. For more information, read [Monitoring and diagnostics for Azure Service Fabric](monitor-service-fabric.md).
## Next steps

* Learn how to create a [cluster in Azure](service-fabric-cluster-creation-via-portal.md) or a [standalone cluster on Windows](service-fabric-cluster-creation-for-windows-server.md).
service-fabric Service Fabric Diagnostics Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-common-scenarios.md
Last updated 07/14/2022
# Diagnose common scenarios with Service Fabric
-This article illustrates common scenarios users have encountered in the area of monitoring and diagnostics with Service Fabric. The scenarios presented cover all 3 layers of service fabric: Application, Cluster, and Infrastructure. Each solution uses Application Insights and Azure Monitor logs, Azure monitoring tools, to complete each scenario. The steps in each solution give users an introduction on how to use Application Insights and Azure Monitor logs in the context of Service Fabric.
-
+This article illustrates common scenarios users have encountered in the area of monitoring and diagnostics with Service Fabric. The scenarios presented cover all three layers of Service Fabric: application, cluster, and infrastructure. Each solution uses Application Insights and Azure Monitor logs, both Azure monitoring tools, to complete the scenario. The steps in each solution give you an introduction to using Application Insights and Azure Monitor logs in the context of Service Fabric.
## Prerequisites and Recommendations
-The solutions in this article will use the following tools. We recommend you have these set up and configured:
+The solutions in this article use the following tools. We recommend you have these set up and configured:
* [Application Insights with Service Fabric](service-fabric-tutorial-monitoring-aspnet.md)
* [Enable Azure Diagnostics on your cluster](service-fabric-diagnostics-event-aggregation-wad.md)
The solutions in this article will use the following tools. We recommend you hav
## How can I see unhandled exceptions in my application?

1. Navigate to the Application Insights resource that your application is configured with.
-2. Click on *Search* in the top left. Then click filter on the next panel.
+2. Select *Search* in the top left. Then select the filter on the next panel.
![AI Overview](media/service-fabric-diagnostics-common-scenarios/ai-search-filter.png)
The solutions in this article will use the following tools. We recommend you hav
![AI Filter List](media/service-fabric-diagnostics-common-scenarios/ai-filter-list.png)
- By clicking an exception in the list, you can look at more details including the service context if you are using the Service Fabric Application Insights SDK.
+ By selecting an exception in the list, you can look at more details, including the service context if you're using the Service Fabric Application Insights SDK.
![AI Exception](media/service-fabric-diagnostics-common-scenarios/ai-exception.png)

## How do I view which HTTP calls are used in my services?

1. In the same Application Insights resource, you can filter on "requests" instead of exceptions and view all requests made.
-2. If you are using the Service Fabric Application Insights SDK, you can see a visual representation of your services connected to one another, and the number of succeeded and failed requests. On the left click "Application Map"
+2. If you're using the Service Fabric Application Insights SDK, you can see a visual representation of your services connected to one another, and the number of succeeded and failed requests. On the left, select "Application Map".
![AI App Map Blade](media/service-fabric-diagnostics-common-scenarios/app-map-blade.png)

![AI App Map](media/service-fabric-diagnostics-common-scenarios/app-map-new.png)
The solutions in this article will use the following tools. We recommend you hav
## How do I create an alert when a node goes down

1. Node events are tracked by your Service Fabric cluster. Navigate to the Service Fabric Analytics solution resource named **ServiceFabric(NameofResourceGroup)**.
-2. Click on the graph on the bottom of the blade titled "Summary"
+2. Select the graph at the bottom of the blade titled "Summary".
![Azure Monitor logs solution](media/service-fabric-diagnostics-common-scenarios/oms-solution-azure-portal.png)
-3. Here you have many graphs and tiles displaying various metrics. Click on one of the graphs and it will take you to the Log Search. Here you can query for any cluster events or performance counters.
+3. Here you have many graphs and tiles displaying various metrics. Select one of the graphs to go to Log Search, where you can query for any cluster events or performance counters.
4. Enter the following query. These event IDs are found in the [Node events reference](service-fabric-diagnostics-event-generation-operational.md#application-events).

```kusto
ServiceFabricOperationalEvent
| where EventID >= 25622 and EventID <= 25626
```
-5. Click "New Alert Rule" at the top and now anytime an event arrives based on this query, you will receive an alert in your chosen method of communication.
+5. Select "New Alert Rule" at the top and now anytime an event arrives based on this query, you'll receive an alert in your chosen method of communication.
![Azure Monitor logs New Alert](media/service-fabric-diagnostics-common-scenarios/oms-create-alert.png)

## How can I be alerted of application upgrade rollbacks?
-1. On the same Log Search window as before enter the following query for upgrade rollbacks. These event IDs are found under [Application events reference](service-fabric-diagnostics-event-generation-operational.md#application-events)
+1. In the same Log Search window as before, enter the following query for upgrade rollbacks. These event IDs are found under [Application events reference](service-fabric-diagnostics-event-generation-operational.md#application-events).
```kusto
ServiceFabricOperationalEvent
| where EventID == 29623 or EventID == 29624
```
-2. Click "New Alert Rule" at the top and now anytime an event arrives based on this query, you will receive an alert.
+2. Select "New Alert Rule" at the top and now anytime an event arrives based on this query, you'll receive an alert.
## How do I see container metrics?
-In the same view with all the graphs, you will see some tiles for the performance of your containers. You need the Log Analytics Agent and [Container Monitoring solution](service-fabric-diagnostics-oms-containers.md) for these tiles to populate.
+In the same view with all the graphs, you'll see some tiles for the performance of your containers. You need the Log Analytics Agent and [Container Monitoring solution](service-fabric-diagnostics-oms-containers.md) for these tiles to populate.
![Log Analytics Container Metrics](media/service-fabric-diagnostics-common-scenarios/containermetrics.png)
In the same view with all the graphs, you will see some tiles for the performanc
![Log Analytics Workspace Tab](media/service-fabric-diagnostics-common-scenarios/workspacetab.png)
-2. Once youΓÇÖre on the workspaceΓÇÖs page, click on ΓÇ£Advanced settingsΓÇ¥ in the same left menu.
+2. Once you're on the workspace's page, select "Advanced settings" in the same left menu.
![Log Analytics Advanced Settings](media/service-fabric-diagnostics-common-scenarios/advancedsettingsoms.png)
-3. Click on Data > Windows Performance Counters (Data > Linux Performance Counters for Linux machines) to start collecting specific counters from your nodes via the Log Analytics agent. Here are examples of the format for counters to add
+3. Select Data > Windows Performance Counters (Data > Linux Performance Counters for Linux machines) to start collecting specific counters from your nodes via the Log Analytics agent. Here are examples of the format for counters to add:
* `.NET CLR Memory(<ProcessNameHere>)\\# Total committed Bytes`
* `Processor(_Total)\\% Processor Time`
In the same view with all the graphs, you will see some tiles for the performanc
![Log Analytics Perf Counters](media/service-fabric-diagnostics-common-scenarios/omsperfcounters.png)
-4. This will allow you to see how your infrastructure is handling your workloads, and set relevant alerts based on resource utilization. For example ΓÇô you may want to set an alert if the total Processor utilization goes above 90% or below 5%. The counter name you would use for this is ΓÇ£% Processor Time.ΓÇ¥ You could do this by creating an alert rule for the following query:
+4. This allows you to see how your infrastructure is handling your workloads, and to set relevant alerts based on resource utilization. For example, you might want to set an alert if the total processor utilization goes above 90% or below 5%. The counter name to use for this is "% Processor Time." You could do this by creating an alert rule for the following query:
```kusto
Perf
| where CounterName == "% Processor Time" and InstanceName == "_Total"
| where CounterValue >= 90 or CounterValue <= 5
```
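
To make such an alert less noisy, a variant can average the counter over five-minute windows so the alert fires on sustained utilization rather than on a single sample. This is a sketch of that idea, not a prescribed rule:

```kusto
// Alert on sustained (5-minute average) CPU pressure instead of single samples
Perf
| where CounterName == "% Processor Time" and InstanceName == "_Total"
| summarize AvgCpu = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
| where AvgCpu >= 90 or AvgCpu <= 5
```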
Check these links for the full list of performance counters on Reliable [Service
* Learn more about Azure Monitor logs [alerting](../azure-monitor/alerts/alerts-overview.md) to aid in detection and diagnostics.
* For on-premises clusters, Azure Monitor logs offers a gateway (HTTP Forward Proxy) that can be used to send data to Azure Monitor logs. Read more about that in [Connecting computers without Internet access to Azure Monitor logs using the Log Analytics gateway](../azure-monitor/agents/gateway.md)
* Get familiarized with the [log search and querying](../azure-monitor/logs/log-query-overview.md) features offered as part of Azure Monitor logs
-* Get a more detailed overview of Azure Monitor logs and what it offers, read [What is Azure Monitor logs?](../azure-monitor/overview.md)
+* For a detailed overview of Azure Monitor logs and what it offers, read [What is Azure Monitor logs?](../azure-monitor/overview.md)
service-fabric Service Fabric Diagnostics Event Aggregation Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-aggregation-wad.md
Now that you're aggregating events in Azure Storage, [set up Azure Monitor logs]
## Deploy the Diagnostics extension through Azure Resource Manager

### Create a cluster with the diagnostics extension
-To create a cluster by using Resource Manager, you need to add the Diagnostics configuration JSON to the full Resource Manager template. We provide a sample five-VM cluster Resource Manager template with Diagnostics configuration added to it as part of our Resource Manager template samples. You can see it at this location in the Azure Samples gallery: [Five-node cluster with Diagnostics Resource Manager template sample](https://azure.microsoft.com/resources/templates/service-fabric-secure-cluster-5-node-1-nodetype/).
+To create a cluster by using Resource Manager, you need to add the Diagnostics configuration JSON to the full Resource Manager template. We provide a Resource Manager template for a five-VM cluster with Diagnostics configuration added to it as part of our Resource Manager template samples. You can see it at this location in the Azure Samples gallery: [Five-node cluster with Diagnostics Resource Manager template sample](https://azure.microsoft.com/resources/templates/service-fabric-secure-cluster-5-node-1-nodetype/).
To see the Diagnostics setting in the Resource Manager template, open the azuredeploy.json file and search for **IaaSDiagnostics**. To create a cluster by using this template, select the **Deploy to Azure** button available at the previous link.
After you modify the template.json file as described, republish the Resource Man
### Update storage quota
-Since the tables populated by the extension grows until the quota is hit, you may want to consider decreasing the quota size. The default value is 50 GB and is configurable in the template under the `overallQuotaInMB` field under `DiagnosticMonitorConfiguration`
+Since the tables populated by the extension grow until the quota is hit, you might want to consider decreasing the quota size. The default value is 50 GB and is configurable in the template under the `overallQuotaInMB` field under `DiagnosticMonitorConfiguration`:
```json "overallQuotaInMB": "50000", ``` ## Log collection configurations
-Logs from additional channels are also available for collection, here are some of the most common configurations you can make in the template for clusters running in Azure.
+Logs from additional channels are also available for collection. Here are some of the most common configurations you can make in the template for clusters running in Azure.
* Operational Channel - Base: Enabled by default, high-level operations performed by Service Fabric and the cluster, including events for a node coming up, a new application being deployed, or an upgrade rollback, etc. For a list of events, refer to [Operational Channel Events](./service-fabric-diagnostics-event-generation-operational.md).
To collect performance counters or event logs, modify the Resource Manager templ
## Collect Performance Counters
-To collect performance metrics from your cluster, add the performance counters to your "WadCfg > DiagnosticMonitorConfiguration" in the Resource Manager template for your cluster. See [Performance monitoring with WAD](service-fabric-diagnostics-perf-wad.md) for steps on modifying your `WadCfg` to collect specific performance counters. Reference [Service Fabric Performance Counters](service-fabric-diagnostics-event-generation-perf.md) for a list of performance counters that we recommend collecting.
+To collect performance metrics from your cluster, add the performance counters to your "WadCfg > DiagnosticMonitorConfiguration" in the Resource Manager template for your cluster. See [Performance monitoring with WAD](service-fabric-diagnostics-perf-wad.md) for steps on modifying your `WadCfg` to collect specific performance counters. Reference [Performance metrics](monitor-service-fabric-reference.md#performance-metrics) for a list of performance counters that we recommend collecting.
If you are using an Application Insights sink, as described in the section below, and want these metrics to show up in Application Insights, then make sure to add the sink name in the "sinks" section as shown above. This will automatically send the performance counters that are individually configured to your Application Insights resource.
service-fabric Service Fabric Diagnostics Event Analysis Oms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-analysis-oms.md
Last updated 07/14/2022
* How do I know when a node goes down?
* How do I know if my application's services have started or stopped?
-
-## Overview of the Log Analytics workspace
-
->[!NOTE]
->While diagnostic storage is enabled by default at the cluster creation time, you must still set up the Log Analytics workspace to read from the diagnostic storage.
-
-Azure Monitor logs collects data from managed resources, including an Azure storage table or an agent, and maintains it in a central repository. The data can then be used for analysis, alerting, and visualization, or further exporting. Azure Monitor logs supports events, performance data, or any other custom data. Check out [steps to configure the diagnostics extension to aggregate events](service-fabric-diagnostics-event-aggregation-wad.md) and [steps to create a Log Analytics workspace to read from the events in storage](service-fabric-diagnostics-oms-setup.md) to make sure data is flowing into Azure Monitor logs.
-
-After data is received by Azure Monitor logs, Azure has several *Monitoring Solutions* that are prepackaged solutions or operational dashboards to monitor incoming data, customized to several scenarios. These include a *Service Fabric Analytics* solution and a *Containers* solution, which are the two most relevant ones to diagnostics and monitoring when using Service Fabric clusters. This article describes how to use the Service Fabric Analytics solution, which is created with the workspace.
+To learn more about using Azure Monitor to collect and analyze data for this service, see [Monitor Azure Service Fabric](monitor-service-fabric.md).
## Access the Service Fabric Analytics solution
-In the [Azure Portal](https://portal.azure.com), go to the resource group in which you created the Service Fabric Analytics solution.
+In the [Azure portal](https://portal.azure.com), go to the resource group in which you created the Service Fabric Analytics solution.
Select the resource **ServiceFabric\<nameOfOMSWorkspace\>**.
-In `Summary`, you will see tiles in the form of a graph for each of the solutions enabled, including one for Service Fabric. Click the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
+In `Summary`, you see tiles in the form of a graph for each of the solutions enabled, including one for Service Fabric. Select the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
![Service Fabric solution](media/service-fabric-diagnostics-event-analysis-oms/oms_service_fabric_summary.PNG)
The following image shows the home page of the Service Fabric Analytics solution
## View Service Fabric Events, including actions on nodes
-On the Service Fabric Analytics page, click on the graph for **Service Fabric Events**.
+On the Service Fabric Analytics page, select the graph for **Service Fabric Events**.
![Service Fabric Solution Operational Channel](media/service-fabric-diagnostics-event-analysis-oms/oms_service_fabric_events_selection.png)
-Click **List** to view the events in a list.
-Once here you will see all the system events that have been collected. For reference, these are from the **WADServiceFabricSystemEventsTable** in the Azure Storage account, and similarly the reliable services and actors events you see next are from those respective tables.
+Select **List** to view the events in a list. Once here, you see all the system events that have been collected. For reference, these are from the **WADServiceFabricSystemEventsTable** in the Azure Storage account, and similarly the reliable services and actors events you see next are from those respective tables.
![Query Operational Channel](media/service-fabric-diagnostics-event-analysis-oms/oms_service_fabric_events.png)
-Alternatively you can click the magnifying glass on the left and use the Kusto query language to find what you're looking for. For example, to find all actions taken on nodes in the cluster, you can use the following query. The event IDs used below are found in the [operational channel events reference](service-fabric-diagnostics-event-generation-operational.md).
+Alternatively, you can select the magnifying glass on the left and use the Kusto query language to find what you're looking for. For example, to find all actions taken on nodes in the cluster, you can use the following query. The event IDs used below are found in the [operational channel events reference](service-fabric-diagnostics-event-generation-operational.md).
```kusto
ServiceFabricOperationalEvent
You can query on many more fields such as the specific nodes (Computer) the syst
## View Service Fabric Reliable Service and Actor events
-On the Service Fabric Analytics page, click the graph for **Reliable Services**.
+On the Service Fabric Analytics page, select the graph for **Reliable Services**.
![Service Fabric Solution Reliable Services](media/service-fabric-diagnostics-event-analysis-oms/oms_reliable_services_events_selection.png)
-Click **List** to view the events in a list. Here you can see events from the reliable services. You can see different events for when the service runasync is started and completed which typically happens on deployments and upgrades.
+Select **List** to view the events in a list. Here you can see events from the reliable services, including different events for when the service's RunAsync is started and completed, which typically happens on deployments and upgrades.
![Query Reliable Services](media/service-fabric-diagnostics-event-analysis-oms/oms_reliable_service_events.png)
Reliable actor events can be viewed in a similar fashion. To configure more deta
},
```
-The Kusto query language is powerful. Another valuable query you can run is to find out which nodes are generating the most events. The query in the screenshot below shows Service Fabric operational events aggregated with the specific service and node.
+The Kusto query language is powerful. Another valuable query you can run is to find out which nodes are generating the most events. The query in the following screenshot shows Service Fabric operational events aggregated with the specific service and node.
![Query Events per Node](media/service-fabric-diagnostics-event-analysis-oms/oms_kusto_query.png)
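
A similar aggregation can be sketched as follows. `Computer` identifies the node and `TaskName` the emitting system service, per the fields this article queries on; verify the column names against your workspace schema.

```kusto
// Count operational events per node and per emitting subsystem
ServiceFabricOperationalEvent
| summarize EventCount = count() by Computer, TaskName
| order by EventCount desc
```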
The Kusto query language is powerful. Another valuable query you can run is to f
* For on-premises clusters, Azure Monitor logs offers a Gateway (HTTP Forward Proxy) that can be used to send data to Azure Monitor logs. Read more about that in [Connecting computers without Internet access to Azure Monitor logs using the Log Analytics gateway](../azure-monitor/agents/gateway.md).
* Configure [automated alerting](../azure-monitor/alerts/alerts-overview.md) to aid in detection and diagnostics.
* Get familiarized with the [log search and querying](../azure-monitor/logs/log-query-overview.md) features offered as part of Azure Monitor logs.
-* Get a more detailed overview of Azure Monitor logs and what it offers, read [What is Azure Monitor logs?](../azure-monitor/overview.md).
+* For a detailed overview of Azure Monitor logs and what it offers, read [What is Azure Monitor logs?](../azure-monitor/overview.md).
service-fabric Service Fabric Diagnostics Event Generation App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-app.md
- Title: Azure Service Fabric Application Level Monitoring
-description: Learn about application and service level events and logs used to monitor and diagnose Azure Service Fabric clusters.
----- Previously updated : 07/14/2022--
-# Application logging
-
-Instrumenting your code is not only a way to gain insights about your users, but also the only way you can know whether something is wrong in your application, and to diagnose what needs to be fixed. Although technically it's possible to connect a debugger to a production service, it's not a common practice. So, having detailed instrumentation data is important.
-
-Some products automatically instrument your code. Although these solutions can work well, manual instrumentation is almost always required to be specific to your business logic. In the end, you must have enough information to forensically debug the application. Service Fabric applications can be instrumented with any logging framework. This document describes a few different approaches to instrumenting your code, and when to choose one approach over another.
-
-For examples on how to use these suggestions, see [Add logging to your Service Fabric application](service-fabric-how-to-diagnostics-log.md).
-
-## Application Insights SDK
-
-Application Insights has a rich integration with Service Fabric out of the box. Users can add the AI Service Fabric nuget packages and receive data and logs created and collected viewable in the Azure portal. Additionally, users are encouraged to add their own telemetry in order to diagnose and debug their applications and track which services and parts of their application are used the most. The [TelemetryClient](/dotnet/api/microsoft.applicationinsights.telemetryclient) class in the SDK provides many ways to track telemetry in your applications. Check out an example of how to instrument and add application insights to your application in our tutorial for [monitoring and diagnosing a .NET application](service-fabric-tutorial-monitoring-aspnet.md)
-
-## EventSource
-
-When you create a Service Fabric solution from a template in Visual Studio, an **EventSource**-derived class (**ServiceEventSource** or **ActorEventSource**) is generated. A template is created, in which you can add events for your application or service. The **EventSource** name **must** be unique, and should be renamed from the default template string MyCompany-&lt;solution&gt;-&lt;project&gt;. Having multiple **EventSource** definitions that use the same name causes an issue at run time. Each defined event must have a unique identifier. If an identifier is not unique, a runtime failure occurs. Some organizations preassign ranges of values for identifiers to avoid conflicts between separate development teams. For more information, see [Vance's blog](/archive/blogs/vancem/introduction-tutorial-logging-etw-events-in-c-system-diagnostics-tracing-eventsource) or the [MSDN documentation](/previous-versions/msp-n-p/dn774985(v=pandp.20)).
-
-## ASP.NET Core logging
-
-It's important to carefully plan how you will instrument your code. The right instrumentation plan can help you avoid potentially destabilizing your code base, and then needing to reinstrument the code. To reduce risk, you can choose an instrumentation library like [Microsoft.Extensions.Logging](https://www.nuget.org/packages/Microsoft.Extensions.Logging/), which is part of Microsoft ASP.NET Core. ASP.NET Core has an [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) interface that you can use with the provider of your choice, while minimizing the effect on existing code. You can use the code in ASP.NET Core on Windows and Linux, and in the full .NET Framework, so your instrumentation code is standardized.
-
-## Next steps
-
-Once you have chosen your logging provider to instrument your applications and services, your logs and events need to be aggregated before they can be sent to any analysis platform. Read about [Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md) and [EventFlow](service-fabric-diagnostics-event-aggregation-eventflow.md) to better understand some of the Azure Monitor recommended options.
service-fabric Service Fabric Diagnostics Event Generation Infra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-infra.md
Last updated 07/14/2022
# Monitoring the cluster
-It is important to monitor at the cluster level to determine whether or not your hardware and cluster are behaving as expected. Though Service Fabric can keep applications running during a hardware failure, but you still need to diagnose whether an error is occurring in an application or in the underlying infrastructure. You also should monitor your clusters to better plan for capacity, helping in decisions about adding or removing hardware.
+It's important to monitor at the cluster level to determine whether your hardware and cluster are behaving as expected. Though Service Fabric can keep applications running during a hardware failure, you still need to diagnose whether an error is occurring in an application or in the underlying infrastructure. You also should monitor your clusters to better plan for capacity, helping in decisions about adding or removing hardware.
Service Fabric exposes several structured platform events, as [Service Fabric events](service-fabric-diagnostics-events.md), through the EventStore and various log channels out-of-the-box.
High-level operations performed by Service Fabric and the cluster, including eve
* **Operational - detailed** Health reports and load balancing decisions.
-The operation channel can be accessed through a variety of ways including ETW/Windows Event Logs, the [EventStore](service-fabric-diagnostics-eventstore.md) (available on Windows in versions 6.2 and later for Windows clusters). The EventStore gives you access to your cluster's events on a per entity basis (entities including cluster, nodes, applications, services, partitions, replicas, and containers) and exposes them via REST APIs and the Service Fabric client library. Use the EventStore to monitor your dev/test clusters, and for getting a point-in-time understanding of the state of your production clusters.
+The operational channel can be accessed in various ways, including ETW/Windows Event Logs and the [EventStore](service-fabric-diagnostics-eventstore.md) (available in versions 6.2 and later for Windows clusters). The EventStore gives you access to your cluster's events on a per-entity basis (entities include cluster, nodes, applications, services, partitions, replicas, and containers) and exposes them via REST APIs and the Service Fabric client library. Use the EventStore to monitor your dev/test clusters, and for getting a point-in-time understanding of the state of your production clusters.
* **Data & Messaging** Critical logs and events generated in the messaging (currently only the ReverseProxy) and data path (reliable services models).
* **Data & Messaging - detailed**
-Verbose channel that contains all the non-critical logs from data and messaging in the cluster (this channel has a very high volume of events).
+Verbose channel that contains all the noncritical logs from data and messaging in the cluster (this channel has a high volume of events).
In addition to these, there are two structured EventSource channels provided, as well as logs that we collect for support purposes.
System logs generated by Service Fabric only to be used by us when providing sup
These various channels cover most of the platform level logging that is recommended. To improve platform level logging, consider investing in better understanding the health model and adding custom health reports, and adding custom **Performance Counters** to build a real-time understanding of the impact of your services and applications on the cluster.
-In order to take advantage of these logs, it is highly recommended to leave "Diagnostics" enabled during cluster creation in the Azure Portal.
-By turning on diagnostics, when the cluster is deployed, Windows Azure Diagnostics is able to acknowledge the Operational,
+In order to take advantage of these logs, it's highly recommended to leave "Diagnostics" enabled during cluster creation in the Azure portal.
+By turning on diagnostics, when the cluster is deployed, Azure Diagnostics can collect events from the Operational,
Reliable Services, and Reliable actors channels, and store the data as explained further in [Aggregate events with Azure Diagnostics](service-fabric-diagnostics-event-aggregation-wad.md).
Service Fabric has its own health model, which is described in detail in these a
- [Add custom Service Fabric health reports](service-fabric-report-health.md)
- [View Service Fabric health reports](service-fabric-view-entities-aggregated-health.md)
-Health monitoring is critical to multiple aspects of operating a service, especially during an application upgrade. After each upgrade domain of the service is upgraded, the upgrade domain must pass health checks before the deployment moves to the next upgrade domain. If OK health status cannot be achieved, the deployment is rolled back, so that the application remains in a known OK state. Although some customers might be affected before the services are rolled back, most customers won't experience an issue. Also, a resolution occurs relatively quickly without having to wait for action from a human operator. The more health checks that are incorporated into your code, the more resilient your service is to deployment issues.
+Health monitoring is critical to multiple aspects of operating a service, especially during an application upgrade. After each upgrade domain of the service is upgraded, the upgrade domain must pass health checks before the deployment moves to the next upgrade domain. If OK health status can't be achieved, the deployment is rolled back, so that the application remains in a known OK state. Although some customers might be affected before the services are rolled back, most customers won't experience an issue. Also, a resolution occurs relatively quickly without having to wait for action from a human operator. The more health checks that are incorporated into your code, the more resilient your service is to deployment issues.
-Another aspect of service health is reporting metrics from the service. Metrics are important in Service Fabric because they are used to balance resource usage. Metrics can also be an indicator of system health. For example, you might have an application that has many services, and each instance reports a requests per second (RPS) metric. If one service is using more resources than another service, Service Fabric moves service instances around the cluster, to try to maintain even resource utilization. For a more detailed explanation of how resource utilization works, see [Manage resource consumption and load in Service Fabric with metrics](service-fabric-cluster-resource-manager-metrics.md).
+Another aspect of service health is reporting metrics from the service. Metrics are important in Service Fabric because they're used to balance resource usage. Metrics can also be an indicator of system health. For example, you might have an application that has many services, and each instance reports a requests per second (RPS) metric. If one service is using more resources than another service, Service Fabric moves service instances around the cluster, to try to maintain even resource utilization. For a more detailed explanation of how resource utilization works, see [Manage resource consumption and load in Service Fabric with metrics](service-fabric-cluster-resource-manager-metrics.md).
-Metrics also can help give you insight into how your service is performing. Over time, you can use metrics to check that the service is operating within expected parameters. For example, if trends show that at 9 AM on Monday morning the average RPS is 1,000, then you might set up a health report that alerts you if the RPS is below 500 or above 1,500. Everything might be perfectly fine, but it might be worth a look to be sure that your customers are having a great experience. Your service can define a set of metrics that can be reported for health check purposes, but that don't affect the resource balancing of the cluster. To do this, set the metric weight to zero. We recommend that you start all metrics with a weight of zero, and not increase the weight until you are sure that you understand how weighting the metrics affects resource balancing for your cluster.
+Metrics can also give you insight into how your service is performing. Over time, you can use metrics to check that the service is operating within expected parameters. For example, if trends show that at 9 AM on Monday morning the average RPS is 1,000, then you might set up a health report that alerts you if the RPS is below 500 or above 1,500. Everything might be perfectly fine, but it might be worth a look to be sure that your customers are having a great experience. Your service can define a set of metrics that can be reported for health check purposes, but that don't affect the resource balancing of the cluster. To do this, set the metric weight to zero. We recommend that you start all metrics with a weight of zero, and not increase the weight until you're sure that you understand how weighting the metrics affects resource balancing for your cluster.
> [!TIP]
> Don't use too many weighted metrics. It can be difficult to understand why service instances are being moved around for balancing. A few metrics can go a long way!
Any information that can indicate the health and performance of your application
## Service Fabric support logs
-If you need to contact Microsoft support for help with your Azure Service Fabric cluster, support logs are almost always required. If your cluster is hosted in Azure, support logs are automatically configured and collected as part of creating a cluster. The logs are stored in a dedicated storage account in your cluster's resource group. The storage account doesn't have a fixed name, but in the account, you see blob containers and tables with names that start with *fabric*. For information about setting up log collections for a standalone cluster, see [Create and manage a standalone Azure Service Fabric cluster](service-fabric-cluster-creation-for-windows-server.md) and [Configuration settings for a standalone Windows cluster](service-fabric-cluster-manifest.md). For standalone Service Fabric instances, the logs should be sent to a local file share. You are **required** to have these logs for support, but they are not intended to be usable by anyone outside of the Microsoft customer support team.
+If you need to contact Microsoft support for help with your Azure Service Fabric cluster, support logs are almost always required. If your cluster is hosted in Azure, support logs are automatically configured and collected as part of creating a cluster. The logs are stored in a dedicated storage account in your cluster's resource group. The storage account doesn't have a fixed name, but in the account, you see blob containers and tables with names that start with *fabric*. For information about setting up log collections for a standalone cluster, see [Create and manage a standalone Azure Service Fabric cluster](service-fabric-cluster-creation-for-windows-server.md) and [Configuration settings for a standalone Windows cluster](service-fabric-cluster-manifest.md). For standalone Service Fabric instances, the logs should be sent to a local file share. You're **required** to have these logs for support, but they aren't intended to be usable by anyone outside of the Microsoft customer support team.
## Measuring performance
-Measure performance of your cluster will help you understand how it is able to handle load and drive decisions around scaling your cluster (see more about scaling a cluster [on Azure](service-fabric-cluster-scale-in-out.md), or [on-premises](service-fabric-cluster-windows-server-add-remove-nodes.md)). Performance data is also useful when compared to actions you or your applications and services may have taken, when analyzing logs in the future.
+Measuring the performance of your cluster helps you understand how it handles load and drives decisions around scaling your cluster (see more about scaling a cluster [on Azure](service-fabric-cluster-scale-in-out.md), or [on-premises](service-fabric-cluster-windows-server-add-remove-nodes.md)). Performance data is also useful when compared to actions you or your applications and services might have taken, when analyzing logs in the future.
-For a list of performance counters to collect when using Service Fabric, see [Performance Counters in Service Fabric](service-fabric-diagnostics-event-generation-perf.md)
+For a list of performance counters to collect when using Service Fabric, see [Performance metrics](monitor-service-fabric-reference.md#performance-metrics).
Here are two common ways in which you can set up collecting performance data for your cluster:
service-fabric Service Fabric Diagnostics Event Generation Operational https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-operational.md
Last updated 07/14/2022
# List of Service Fabric events
-Service Fabric exposes a primary set of cluster events to inform you of the status of your cluster as [Service Fabric Events](service-fabric-diagnostics-events.md). These are based on actions performed by Service Fabric on your nodes and your cluster or management decisions made by a cluster owner/operator. These events can be accessed by configuring in a number of ways including configuring [Azure Monitor logs with your cluster](service-fabric-diagnostics-oms-setup.md), or querying the [EventStore](service-fabric-diagnostics-eventstore.md). On Windows machines, these events are fed into the EventLog - so you can see Service Fabric Events in Event Viewer.
+Service Fabric exposes a primary set of cluster events to inform you of the status of your cluster as [Service Fabric Events](service-fabric-diagnostics-events.md). These are based on actions performed by Service Fabric on your nodes and your cluster, or on management decisions made by a cluster owner/operator. These events can be accessed in various ways, including configuring [Azure Monitor logs with your cluster](service-fabric-diagnostics-oms-setup.md) or querying the [EventStore](service-fabric-diagnostics-eventstore.md). On Windows machines, these events are fed into the Event Log, so you can see Service Fabric events in Event Viewer.
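
For example, assuming these events flow into the `ServiceFabricOperationalEvent` Log Analytics table used elsewhere in this documentation, you can query node lifecycle events directly by the IDs listed in the node events table later in this article:

```kusto
// NodeUp (18603) and NodeDown (18604), per the node events table below
ServiceFabricOperationalEvent
| where EventID in (18603, 18604)
| sort by TimeGenerated desc
```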
Here are some characteristics of these events
-* Each event is tied to a specific entity in the cluster e.g. Application, Service, Node, Replica.
+* Each event is tied to a specific entity in the cluster, for example, Application, Service, Node, Replica.
* Each event contains a set of common fields: EventInstanceId, EventName, and Category.
-* Each event contains fields that tie the event back to the entity it is associated with. For instance, the ApplicationCreated event would have fields that identify the name of the application created.
-* Events are structured in such a way that they can be consumed in a variety of tools to do further analysis. Additionally, relevant details for an event are defined as separate properties as opposed to a long String.
+* Each event contains fields that tie the event back to the entity it's associated with. For instance, the ApplicationCreated event would have fields that identify the name of the application created.
+* Events are structured in such a way that they can be consumed in various tools to do further analysis. Additionally, relevant details for an event are defined as separate properties as opposed to a long String.
* Events written by different subsystems in Service Fabric are identified by Source (Task) below. More information is available on these subsystems in [Service Fabric Architecture](service-fabric-architecture.md) and [Service Fabric Technical Overview](service-fabric-technical-overview.md).
-Here is a list of these Service Fabric events organized by entity.
+Here's a list of these Service Fabric events organized by entity.
## Cluster events
More details on cluster upgrades can be found [here](service-fabric-cluster-upgr
| | | | | | |
| 29627 | ClusterUpgradeStarted | Upgrade | A cluster upgrade has started | CM | Informational |
| 29628 | ClusterUpgradeCompleted | Upgrade | A cluster upgrade has completed | CM | Informational |
-| 29629 | ClusterUpgradeRollbackStarted | Upgrade | A cluster upgrade has started to rollback | CM | Warning |
+| 29629 | ClusterUpgradeRollbackStarted | Upgrade | A cluster upgrade has started to roll back | CM | Warning |
| 29630 | ClusterUpgradeRollbackCompleted | Upgrade | A cluster upgrade has completed rolling back | CM | Warning |
| 29631 | ClusterUpgradeDomainCompleted | Upgrade | An upgrade domain has finished upgrading during a cluster upgrade | CM | Informational |
More details on cluster upgrades can be found [here](service-fabric-cluster-upgr
| | | | | | |
| 18602 | NodeDeactivateCompleted | StateTransition | Deactivation of a node has completed | FM | Informational |
| 18603 | NodeUp | StateTransition | The cluster has detected a node has started up | FM | Informational |
-| 18604 | NodeDown | StateTransition | The cluster has detected a node has shut down. During a node restart, you will see a NodeDown event followed by a NodeUp event | FM | Error |
+| 18604 | NodeDown | StateTransition | The cluster has detected a node has shut down. During a node restart, you'll see a NodeDown event followed by a NodeUp event | FM | Error |
| 18605 | NodeAddedToCluster | StateTransition | A new node has been added to the cluster and Service Fabric can deploy applications to this node | FM | Informational |
| 18606 | NodeRemovedFromCluster | StateTransition | A node has been removed from the cluster. Service Fabric will no longer deploy applications to this node | FM | Informational |
| 18607 | NodeDeactivateStarted | StateTransition | Deactivation of a node has started | FM | Informational |
More details on application upgrades can be found [here](service-fabric-applicat
| | | | | | |
| 29621 | ApplicationUpgradeStarted | Upgrade | An application upgrade has started | CM | Informational |
| 29622 | ApplicationUpgradeCompleted | Upgrade | An application upgrade has completed | CM | Informational |
-| 29623 | ApplicationUpgradeRollbackStarted | Upgrade | An application upgrade has started to rollback |CM | Warning |
+| 29623 | ApplicationUpgradeRollbackStarted | Upgrade | An application upgrade has started to roll back |CM | Warning |
| 29624 | ApplicationUpgradeRollbackCompleted | Upgrade | An application upgrade has completed rolling back | CM | Warning |
| 29626 | ApplicationUpgradeDomainCompleted | Upgrade | An upgrade domain has finished upgrading during an application upgrade | CM | Informational |
The [Service Fabric Health Model](service-fabric-health-introduction.md) provide
## Events prior to version 6.2
-Here is a comprehensive list of events provided by Service Fabric prior to version 6.2.
+Here's a comprehensive list of events provided by Service Fabric before version 6.2.
| EventId | Name | Source (Task) | Level |
| | | | |
Here is a comprehensive list of events provided by Service Fabric prior to versi
## Next steps
-* Get an overview of [diagnostics in Service Fabric](service-fabric-diagnostics-overview.md)
+* Get an overview of [diagnostics in Service Fabric](monitor-service-fabric.md)
* Learn more about the EventStore in [Service Fabric Eventstore Overview](service-fabric-diagnostics-eventstore.md)
* Modifying your [Azure Diagnostics](service-fabric-diagnostics-event-aggregation-wad.md) configuration to collect more logs
* [Setting up Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md) to see your Operational channel logs
service-fabric Service Fabric Diagnostics Event Generation Perf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-perf.md
- Title: Azure Service Fabric Performance Monitoring
-description: Learn about performance counters for monitoring and diagnostics of Azure Service Fabric clusters.
------ Previously updated : 07/14/2022--
-# Performance metrics
-
-Metrics should be collected to understand the performance of your cluster as well as the applications running in it. For Service Fabric clusters, we recommend collecting the following performance counters.
-
-## Nodes
-
-For the machines in your cluster, consider collecting the following performance counters to better understand the load on each machine and make appropriate cluster scaling decisions.
-
-| Counter Category | Counter Name |
-| | |
-| Logical Disk | Logical Disk Free Space |
-| PhysicalDisk(per Disk) | Avg. Disk Read Queue Length |
-| PhysicalDisk(per Disk) | Avg. Disk Write Queue Length |
-| PhysicalDisk(per Disk) | Avg. Disk sec/Read |
-| PhysicalDisk(per Disk) | Avg. Disk sec/Write |
-| PhysicalDisk(per Disk) | Disk Reads/sec |
-| PhysicalDisk(per Disk) | Disk Read Bytes/sec |
-| PhysicalDisk(per Disk) | Disk Writes/sec |
-| PhysicalDisk(per Disk) | Disk Write Bytes/sec |
-| Memory | Available MBytes |
-| PagingFile | % Usage |
-| Processor(Total) | % Processor Time |
-| Process (per service) | % Processor Time |
-| Process (per service) | ID Process |
-| Process (per service) | Private Bytes |
-| Process (per service) | Thread Count |
-| Process (per service) | Virtual Bytes |
-| Process (per service) | Working Set |
-| Process (per service) | Working Set - Private |
-| Network Interface(all-instances) | Bytes recd |
-| Network Interface(all-instances) | Bytes sent |
-| Network Interface(all-instances) | Bytes total |
-| Network Interface(all-instances) | Output Queue Length |
-| Network Interface(all-instances) | Packets Outbound Discarded |
-| Network Interface(all-instances) | Packets Received Discarded |
-| Network Interface(all-instances) | Packets Outbound Errors |
-| Network Interface(all-instances) | Packets Received Errors |
-
-## .NET applications and services
-
-Collect the following counters if you are deploying .NET services to your cluster.
-
-| Counter Category | Counter Name |
-| | |
-| .NET CLR Memory (per service) | Process ID |
-| .NET CLR Memory (per service) | # Total committed Bytes |
-| .NET CLR Memory (per service) | # Total reserved Bytes |
-| .NET CLR Memory (per service) | # Bytes in all Heaps |
-| .NET CLR Memory (per service) | Large Object Heap size |
-| .NET CLR Memory (per service) | # GC Handles |
-| .NET CLR Memory (per service) | # Gen 0 Collections |
-| .NET CLR Memory (per service) | # Gen 1 Collections |
-| .NET CLR Memory (per service) | # Gen 2 Collections |
-| .NET CLR Memory (per service) | % Time in GC |
-
-### Service Fabric's custom performance counters
-
-Service Fabric generates a substantial amount of custom performance counters. If you have the SDK installed, you can see the comprehensive list on your Windows machine in your Performance Monitor application (Start > Performance Monitor).
-
-In the applications you are deploying to your cluster, if you are using Reliable Actors, add counters from `Service Fabric Actor` and `Service Fabric Actor Method` categories (see [Service Fabric Reliable Actors Diagnostics](service-fabric-reliable-actors-diagnostics.md)).
-
-If you use Reliable Services or Service Remoting, we similarly have `Service Fabric Service` and `Service Fabric Service Method` counter categories that you should collect counters from, see [monitoring with service remoting](service-fabric-reliable-serviceremoting-diagnostics.md) and [reliable services performance counters](service-fabric-reliable-services-diagnostics.md#performance-counters).
-
-If you use Reliable Collections, we recommend adding the `Avg. Transaction ms/Commit` from the `Service Fabric Transactional Replicator` to collect the average commit latency per transaction metric.
--
-## Next steps
-
-* Learn more about [event generation at the platform level](service-fabric-diagnostics-event-generation-infra.md) in Service Fabric
-* Collect performance metrics through [Log Analytics agent](service-fabric-diagnostics-oms-agent.md)
service-fabric Service Fabric Diagnostics Eventstore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-eventstore.md
Introduced in version 6.2, the EventStore service is a monitoring option in Serv
The EventStore is a stateful Service Fabric service that maintains events from the cluster. The events are exposed through Service Fabric Explorer and REST APIs. EventStore queries the cluster directly to get diagnostics data on any entity in your cluster and should be used to help:
* Diagnose issues in development or testing, or where you might be using a monitoring pipeline
-* Confirm that management actions you are taking on your cluster are being processed correctly
+* Confirm that management actions you're taking on your cluster are being processed correctly
* Get a "snapshot" of how Service Fabric is interacting with a particular entity ![Screenshot shows the EVENTS tab of the Nodes pane several events, including a NodeDown event.](media/service-fabric-diagnostics-eventstore/eventstore.png)
The EventStore service can be queried for events that are available for each ent
* Partition Replicas: events from all replicas / instances within a specific partition identified by `partitionId`
* Partition Replica: events from a specific replica / instance identified by `replicaId` and `partitionId`
-To learn more about the API check out the [EventStore API reference](/rest/api/servicefabric/sfclient-index-eventsstore).
+To learn more about the API, see the [EventStore API reference](/rest/api/servicefabric/sfclient-index-eventsstore).
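As an illustration, here's a hypothetical request for all cluster events in a time window; the cluster address, API version, and timestamps are placeholders, and the exact query parameters depend on your cluster version (check the API reference above):

```
GET https://mycluster.eastus.cloudapp.azure.com:19080/EventsStore/Cluster/Events?api-version=6.4&StartTimeUtc=2022-07-01T00:00:00Z&EndTimeUtc=2022-07-02T00:00:00Z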
-The EventStore service also has the ability to correlate events in your cluster. By looking at events that were written at the same time from different entities that may have impacted each other, the EventStore service is able to link these events to help with identifying causes for activities in your cluster. For example, if one of your applications happens to become unhealthy without any induced changes, the EventStore will also look at other events exposed by the platform and could correlate this with an `Error` or `Warning` event. This helps with faster failure detection and root causes analysis.
+The EventStore service also has the ability to correlate events in your cluster. By looking at events that were written at the same time from different entities that might have impacted each other, the EventStore service is able to link these events to help with identifying causes for activities in your cluster. For example, if one of your applications happens to become unhealthy without any induced changes, the EventStore will also look at other events exposed by the platform and could correlate this with an `Error` or `Warning` event. This helps with faster failure detection and root-cause analysis.
## Enable EventStore on your cluster
In [fabricSettings.json in your cluster](service-fabric-cluster-fabric-settings.
```
### Azure cluster version 6.5+
-If your Azure cluster gets upgraded to version 6.5 or higher, EventStore will be automatically enabled on your cluster. To opt out, you need to update your cluster template with the following:
+If your Azure cluster gets upgraded to version 6.5 or higher, EventStore is automatically enabled on your cluster. To opt out, you need to update your cluster template with the following:
* Use an API version of `2019-03-01` or newer
* Add the following code to your properties section in your cluster (sketched below)
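Here's a minimal sketch of that opt-out, assuming the approach of setting the EventStore service's target replica set size to zero; verify the exact setting names against the current cluster template reference:

```json
"fabricSettings": [
    {
        "name": "EventStoreService",
        "parameters": [
            {
                "name": "TargetReplicaSetSize",
                "value": "0"
            }
        ]
    }
]
```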
If your Azure cluster gets upgraded to version 6.5 or higher, EventStore will be
### Azure cluster version 6.4
-If you are using version 6.4, you can edit your Azure Resource Manager template to turn on EventStore service. This is done by performing a [cluster config upgrade](service-fabric-cluster-config-upgrade-azure.md) and adding the following code, you can use PlacementConstraints to put the replicas of the EventStore service on a specific NodeType e.g. a NodeType dedicated for the system services. The `upgradeDescription` section configures the config upgrade to trigger a restart on the nodes. You can remove the section in another update.
+If you're using version 6.4, you can edit your Azure Resource Manager template to turn on the EventStore service. This is done by performing a [cluster config upgrade](service-fabric-cluster-config-upgrade-azure.md) and adding the following code. You can use PlacementConstraints to put the replicas of the EventStore service on a specific NodeType, for example, a NodeType dedicated to the system services. The `upgradeDescription` section configures the config upgrade to trigger a restart on the nodes. You can remove the section in another update.
```json
"fabricSettings": [
If you are using version 6.4, you can edit your Azure Resource Manager template
## Next steps
* Get started with the EventStore API - [Using the EventStore APIs in Azure Service Fabric clusters](service-fabric-diagnostics-eventstore-query.md)
* Learn more about the list of events offered by EventStore - [Service Fabric events](service-fabric-diagnostics-event-generation-operational.md)
-* Overview of monitoring and diagnostics in Service Fabric - [Monitoring and Diagnostics for Service Fabric](service-fabric-diagnostics-overview.md)
+* Overview of monitoring and diagnostics in Service Fabric - [Monitor Service Fabric](monitor-service-fabric.md)
* View the full list of API calls - [EventStore REST API Reference](/rest/api/servicefabric/sfclient-index-eventsstore)
* Learn more about monitoring your cluster - [Monitoring the cluster and platform](service-fabric-diagnostics-event-generation-infra.md).
service-fabric Service Fabric Diagnostics Oms Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-agent.md
Last updated 07/14/2022
This article covers the steps to add the Log Analytics agent as a virtual machine scale set extension to your cluster, and connect it to your existing Azure Log Analytics workspace. This enables collecting diagnostics data about containers, applications, and performance monitoring. By adding it as an extension to the virtual machine scale set resource, Azure Resource Manager ensures that it gets installed on every node, even when scaling the cluster.

> [!NOTE]
-> This article assumes that you have an Azure Log Analytics workspace already set up. If you do not, head over to [Set up Azure Monitor logs](service-fabric-diagnostics-oms-setup.md)
-
+> This article assumes that you have an Azure Log Analytics workspace already set up. If you do not, head over to [Set up Azure Monitor logs](service-fabric-diagnostics-oms-setup.md).
## Add the agent extension via Azure CLI
-The best way to add the Log Analytics agent to your cluster is via the virtual machine scale set APIs available with the Azure CLI. If you do not have Azure CLI set up yet, head over to Azure portal and open up a [Cloud Shell](../cloud-shell/overview.md) instance, or [Install the Azure CLI](/cli/azure/install-azure-cli).
+The best way to add the Log Analytics agent to your cluster is via the virtual machine scale set APIs available with the Azure CLI. If you don't have the Azure CLI set up yet, head over to the Azure portal and open up a [Cloud Shell](../cloud-shell/overview.md) instance, or [Install the Azure CLI](/cli/azure/install-azure-cli).
-1. Once your Cloud Shell is requested, make sure you are working in the same subscription as your resource. Check this with `az account show` and make sure the "name" value matches that of your cluster's subscription.
+1. Once your Cloud Shell is requested, make sure you're working in the same subscription as your resource. Check this with `az account show` and make sure the "name" value matches that of your cluster's subscription.
-2. In the Portal, navigate to the resource group where your Log Analytics workspace is located. Click into the log analytics resource (the type of the resource will be Log Analytics workspace). Once you are at the resource overview page, click on **Advanced Settings** under the Settings section on the left menu.
+2. In the portal, navigate to the resource group where your Log Analytics workspace is located. Select the Log Analytics resource (the resource type is Log Analytics workspace). Once you're at the resource overview page, select **Advanced Settings** under the Settings section on the left menu.
![Log analytics properties page](media/service-fabric-diagnostics-oms-agent/oms-advanced-settings.png)
-3. Click on **Windows Servers** if you are standing up a Windows cluster, and **Linux Servers** if you are creating a Linux cluster. This page will show you your `workspace ID` and `workspace key` (listed as Primary Key in the portal). You will need both for the next step.
+3. Select **Windows Servers** if you're standing up a Windows cluster, and **Linux Servers** if you're creating a Linux cluster. This page shows your `workspace ID` and `workspace key` (listed as Primary Key in the portal). You need both for the next step.
4. Run the command to install the Log Analytics agent onto your cluster, using the `vmss extension set` API:
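   For a Windows cluster, the call might look like the following sketch. The extension name and publisher reflect the Log Analytics (Microsoft Monitoring) agent; for Linux, the extension is typically `OmsAgentForLinux`. The resource group, node type, and workspace values are placeholders you replace with the values from the previous step:

```sh
# Install the Log Analytics agent on every node of the scale set backing a node type.
# <resourceGroup>, <nodeTypeName>, <workspaceId>, and <workspaceKey> are placeholders.
az vmss extension set \
    --resource-group <resourceGroup> \
    --vmss-name <nodeTypeName> \
    --name MicrosoftMonitoringAgent \
    --publisher Microsoft.EnterpriseCloud.Monitoring \
    --settings "{'workspaceId': '<workspaceId>'}" \
    --protected-settings "{'workspaceKey': '<workspaceKey>'}"
```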
Now that you have added the Log Analytics agent, head on over to the Log Analyti
1. In the Azure portal, go to the resource group in which you created the Service Fabric Analytics solution. Select **ServiceFabric\<nameOfLog AnalyticsWorkspace\>**.
-2. Click **Log Analytics**.
+2. Select **Log Analytics**.
-3. Click **Advanced Settings**.
+3. Select **Advanced Settings**.
-4. Click **Data**, then click **Windows or Linux Performance Counters**. There is a list of default counters you can choose to enable and you can set the interval for collection too. You can also add [additional performance counters](service-fabric-diagnostics-event-generation-perf.md) to collect. The proper format is referenced in this [article](/windows/win32/perfctrs/specifying-a-counter-path).
+4. Select **Data**, then choose **Windows or Linux Performance Counters**. There's a list of default counters you can choose to enable, and you can set the interval for collection too. You can also add [additional performance counters](monitor-service-fabric-reference.md#performance-metrics) to collect. The proper format is referenced in this [article](/windows/win32/perfctrs/specifying-a-counter-path).
-5. Click **Save**, then click **OK**.
+5. Select **Save**, then choose **OK**.
6. Close the Advanced Settings blade.
-7. Under the General heading, click **Workspace summary**.
+7. Under the General heading, select **Workspace summary**.
-8. You will see tiles in the form of a graph for each of the solutions enabled, including one for Service Fabric. Click the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
+8. You'll see tiles in the form of a graph for each of the solutions enabled, including one for Service Fabric. Select the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
-9. You will see a few tiles with graphs on operational channel and reliable services events. The graphical representation of the data flowing in for the counters you have selected will appear under Node Metrics.
+9. You'll see a few tiles with graphs on operational channel and reliable services events. The graphical representation of the data flowing in for the counters you selected appears under Node Metrics.
-10. Click on a Container Metric graph to see additional details. You can also query on performance counter data similarly to cluster events and filter on the nodes, perf counter name, and values using the Kusto query language.
+10. Select a Container Metric graph to see additional details. You can also query on performance counter data similarly to cluster events and filter on the nodes, perf counter name, and values using the Kusto query language.
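For example, a query along these lines filters the standard Log Analytics `Perf` table by counter name and aggregates by node; the counter shown is a placeholder:

```kusto
// Average CPU utilization per node in 5-minute buckets (counter name is illustrative)
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
```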
![Log Analytics perf counter query](media/service-fabric-diagnostics-event-analysis-oms/oms_node_metrics_table.PNG)

## Next steps
-* Collect relevant [performance counters](service-fabric-diagnostics-event-generation-perf.md). To configure the Log Analytics agent to collect specific performance counters, review [configuring data sources](../azure-monitor/agents/agent-data-sources.md#configure-data-sources).
+* Collect relevant [performance counters](monitor-service-fabric-reference.md#performance-metrics). To configure the Log Analytics agent to collect specific performance counters, review [configuring data sources](../azure-monitor/agents/agent-data-sources.md#configure-data-sources).
* Configure Azure Monitor logs to set up [automated alerting](../azure-monitor/alerts/alerts-overview.md) to aid in detection and diagnostics
* As an alternative, you can collect performance counters through the [Azure Diagnostics extension and send them to Application Insights](service-fabric-diagnostics-event-aggregation-wad.md#add-the-application-insights-sink-to-the-resource-manager-template)
service-fabric Service Fabric Diagnostics Oms Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-containers.md
This article covers the steps required to set up the Azure Monitor logs containe
[!INCLUDE [log-analytics-agent-note.md](~/reusable-content/ce-skilling/azure/includes/log-analytics-agent-note.md)]
-
## Set up the container monitoring solution

> [!NOTE]
service-fabric Service Fabric Diagnostics Oms Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-setup.md
Last updated 07/14/2022
# Set up Azure Monitor logs for a cluster
-Azure Monitor logs is our recommendation to monitor cluster level events. You can set up Log Analytics workspace through Azure Resource Manager, PowerShell, or Azure Marketplace. If you maintain an updated Resource Manager template of your deployment for future use, use the same template to set up your Azure Monitor logs environment. Deployment via Marketplace is easier if you already have a cluster deployed with diagnostics enabled. If you do not have subscription-level access in the account to which you are deploying to, deploy by using PowerShell or the Resource Manager template.
+Azure Monitor logs is our recommendation to monitor cluster level events. You can set up Log Analytics workspace through Azure Resource Manager, PowerShell, or Azure Marketplace. If you maintain an updated Resource Manager template of your deployment for future use, use the same template to set up your Azure Monitor logs environment. Deployment via Marketplace is easier if you already have a cluster deployed with diagnostics enabled. If you don't have subscription-level access in the account to which you're deploying, deploy by using PowerShell or the Resource Manager template.
> [!NOTE]
-> To set up Azure Monitor logs to monitor your cluster, you need to have diagnostics enabled to view cluster-level or platform-level events. Refer to [how to set up diagnostics in Windows clusters](service-fabric-diagnostics-event-aggregation-wad.md) and [how to set up diagnostics in Linux clusters](service-fabric-diagnostics-oms-syslog.md) for more
--
+> To set up Azure Monitor logs to monitor your cluster, you need to have diagnostics enabled to view cluster-level or platform-level events. Refer to [how to set up diagnostics in Windows clusters](service-fabric-diagnostics-event-aggregation-wad.md) and [how to set up diagnostics in Linux clusters](service-fabric-diagnostics-oms-syslog.md) for more information.
[!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)]

## Deploy a Log Analytics workspace by using Azure Marketplace
-If you want to add a Log Analytics workspace after you have deployed a cluster, go to Azure Marketplace in the portal and look for **Service Fabric Analytics**. This is a custom solution for Service Fabric deployments that has data specific to Service Fabric. In this process you will create both the solution (the dashboard to view insights) and workspace (the aggregation of the underlying cluster data).
+If you want to add a Log Analytics workspace after you have deployed a cluster, go to Azure Marketplace in the portal and look for **Service Fabric Analytics**. This is a custom solution for Service Fabric deployments that has data specific to Service Fabric. In this process, you create both the solution (the dashboard to view insights) and workspace (the aggregation of the underlying cluster data).
1. Select **New** on the left navigation menu.
If you want to add a Log Analytics workspace after you have deployed a cluster,
![Service Fabric Analytics in Marketplace](media/service-fabric-diagnostics-event-analysis-oms/service-fabric-analytics.png)
-4. In the Service Fabric Analytics creation window, select **Select a workspace** for the **OMS Workspace** field, and then **Create a new workspace**. Fill out the required entries. The only requirement here is that the subscription for the Service Fabric cluster and the workspace is the same. When your entries have been validated, your workspace starts to deploy. The deployment takes only a few minutes.
+4. In the Service Fabric Analytics creation window, select **Select a workspace** for the **OMS Workspace** field, and then **Create a new workspace**. Fill out the required entries. The only requirement is that the subscription for the Service Fabric cluster and the workspace is the same. When your entries have been validated, your workspace starts to deploy. The deployment takes only a few minutes.
5. When finished, select **Create** again at the bottom of the Service Fabric Analytics creation window. Make sure that the new workspace shows up under **OMS Workspace**. This action adds the solution to the workspace you created.
-If you are using Windows, continue with the following steps to connect Azure Monitor logs to the storage account where your cluster events are stored.
+If you're using Windows, continue with the following steps to connect Azure Monitor logs to the storage account where your cluster events are stored.
>[!NOTE]
>The Service Fabric Analytics solution is only supported for Windows clusters. For Linux clusters, check out our article on [how to set up Azure Monitor logs for Linux clusters](service-fabric-diagnostics-oms-syslog.md).
If you are using Windows, continue with the following steps to connect Azure Mon
1. The workspace needs to be connected to the diagnostics data coming from your cluster. Go to the resource group in which you created the Service Fabric Analytics solution. Select **ServiceFabric\<nameOfWorkspace\>** and go to its overview page. From there, you can change solution settings, workspace settings, and access the Log Analytics workspace.
-2. On the left navigation menu, click on **Overview tab**,under **Connect a Data Source Tab** select **Storage accounts logs**.
+2. On the left navigation menu, select the **Overview** tab. Under **Connect a Data Source**, select **Storage accounts logs**.
3. On the **Storage account logs** page, select **Add** at the top to add your cluster's logs to the workspace.
-4. Select **Storage account** to add the appropriate account created in your cluster. If you used the default name, the storage account is **sfdg\<resourceGroupName\>**. You can also confirm this with the Azure Resource Manager template used to deploy your cluster, by checking the value used for **applicationDiagnosticsStorageAccountName**. If the name does not show up, scroll down and select **Load more**. Select the storage account name.
+4. Select **Storage account** to add the appropriate account created in your cluster. If you used the default name, the storage account is **sfdg\<resourceGroupName\>**. You can also confirm this with the Azure Resource Manager template used to deploy your cluster, by checking the value used for **applicationDiagnosticsStorageAccountName**. If the name doesn't show up, scroll down and select **Load more**. Select the storage account name.
5. Specify the Data Type. Set it to **Service Fabric Events**.
If you are using Windows, continue with the following steps to connect Azure Mon
The account now shows up as part of your storage account logs in your workspace's data sources.
-You have added the Service Fabric Analytics solution in an Log Analytics workspace that's now correctly connected to your cluster's platform and application log table. You can add additional sources to the workspace in the same way.
+You've added the Service Fabric Analytics solution in a Log Analytics workspace that's now correctly connected to your cluster's platform and application log table. You can add additional sources to the workspace in the same way.
## Deploy Azure Monitor logs with Azure Resource Manager
When you deploy a cluster by using a Resource Manager template, the template cre
You can use and modify [this sample template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-OMS-UnSecure) to meet your requirements. This template does the following:
-* Creates a 5 node Service Fabric cluster
+* Creates a five-node Service Fabric cluster
* Creates a Log Analytics workspace and Service Fabric solution
-* Configures the Log Analytics agent to collect and send 2 sample performance counters to the workspace
+* Configures the Log Analytics agent to collect and send two sample performance counters to the workspace
* Configures WAD to collect Service Fabric events and send them to Azure storage tables (WADServiceFabric*EventTable)
* Configures the Log Analytics workspace to read the events from these tables
Set-AzOperationalInsightsIntelligencePack -ResourceGroupName $ResourceGroup -Wor
```
-When you're done, follow the steps in the preceding section to connect Azure Monitor logs to the appropriate storage account.
+When you finish, follow the steps in the preceding section to connect Azure Monitor logs to the appropriate storage account.
You can also add other solutions or make other modifications to your Log Analytics workspace by using PowerShell. To learn more, see [Manage Azure Monitor logs using PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md).
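As a minimal sketch of that flow, the following creates a workspace and enables the Service Fabric solution (the intelligence pack shown truncated above); the resource group, workspace name, location, and SKU are placeholders:

```powershell
# Create a Log Analytics workspace (names, location, and SKU are placeholders).
New-AzOperationalInsightsWorkspace -ResourceGroupName "myResourceGroup" -Name "mySfWorkspace" -Location "eastus" -Sku "PerGB2018"

# Enable the Service Fabric solution (intelligence pack) on the workspace.
Set-AzOperationalInsightsIntelligencePack -ResourceGroupName "myResourceGroup" -WorkspaceName "mySfWorkspace" -IntelligencePackName "ServiceFabric" -Enabled $true
```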
service-fabric Service Fabric Diagnostics Oms Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-syslog.md
Title: Monitor Linux cluster events in Azure Service Fabric
-description: Learn how to monitor a Service Fabric Linux cluster events by writing Service Fabric platform events to Syslog.
+description: Learn how to monitor Service Fabric Linux cluster events by writing Service Fabric platform events to Syslog.
Last updated 07/14/2022
# Service Fabric Linux cluster events in Syslog
-Service Fabric exposes a set of platform events to inform you of important activity in your cluster. The full list of events that are exposed is available [here](service-fabric-diagnostics-event-generation-operational.md). There are variety of ways through which these events can be consumed. In this article, we are going to discuss how to configure Service Fabric to write these events to Syslog.
-
+Service Fabric exposes a set of platform events to inform you of important activity in your cluster. The full list of events that are exposed is available [here](service-fabric-diagnostics-event-generation-operational.md). There are various ways through which these events can be consumed. In this article, we discuss how to configure Service Fabric to write these events to Syslog.
## Introduction
-In the 6.4 release, the SyslogConsumer has been introduced to send the Service Fabric platform events to Syslog for Linux clusters. Once turned on, events will automatically flow to Syslog which can be collected and sent by the Log Analytics Agent.
+In the 6.4 release, the SyslogConsumer was introduced to send the Service Fabric platform events to Syslog for Linux clusters. Once turned on, events automatically flow to Syslog, where they can be collected and sent by the Log Analytics agent.
-Each Syslog event has 4 components
+Each Syslog event has four components:
* Facility
* Identity
* Message
* Severity
-The SyslogConsumer writes all platform events using Facility `Local0`. You can update to any valid facility by changing the config. The Identity used is `ServiceFabric`. The Message field contains the whole event serialized in JSON so that it can be queried and consumed by a variety of tools.
+The SyslogConsumer writes all platform events using Facility `Local0`. You can update to any valid facility by changing the config. The Identity used is `ServiceFabric`. The Message field contains the whole event serialized in JSON so that it can be queried and consumed by various tools.
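To spot-check that events are arriving on a node, you can search the local syslog for the `ServiceFabric` identity. This is a sketch; the log file path varies by distribution:

```sh
# Show the five most recent Service Fabric platform events in the local syslog.
# /var/log/syslog is typical on Ubuntu; the path varies by distribution.
grep ServiceFabric /var/log/syslog | tail -n 5
```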
## Enable SyslogConsumer
To enable the SyslogConsumer, you need to perform an upgrade of your cluster. Th
```
Here are the changes to call out:
-1. In the Common section, there is a new parameter called `LinuxStructuredTracesEnabled`. **This is required to have Linux events structured and serialized when sent to Syslog.**
-2. In the Diagnostics section, a new ConsumerInstance: SyslogConsumer has been added. This tells the platform there is another consumer of the events.
-3. The new section SyslogConsumer needs to have `IsEnabled` as `true`. It is configured to use the Local0 facility automatically. You can override this by adding another parameter.
+1. In the Common section, there's a new parameter called `LinuxStructuredTracesEnabled`. **This is required to have Linux events structured and serialized when sent to Syslog.**
+2. In the Diagnostics section, a new ConsumerInstance: SyslogConsumer was added. This tells the platform that there's another consumer of the events.
+3. The new section SyslogConsumer needs to have `IsEnabled` as `true`. It's configured to use the Local0 facility automatically. You can override this by adding another parameter.
```json
{
Here are the changes to call out
```

## Azure Monitor logs integration
-You can read these Syslog events in a monitoring tool such as Azure Monitor logs. You can create a Log Analytics workspace by using the Azure Marketplace using these [instructions].(../azure-monitor/logs/quick-create-workspace.md)
+You can read these Syslog events in a monitoring tool such as Azure Monitor logs. You can create a Log Analytics workspace by using the Azure Marketplace using these [instructions](../azure-monitor/logs/quick-create-workspace.md).
+ You also need to add the Log Analytics agent to your cluster to collect and send this data to the workspace. This is the same agent used to collect performance counters.
-1. Navigate to the `Advanced Settings` blade
+1. Navigate to the `Advanced Settings` section
![Workspace Settings](media/service-fabric-diagnostics-oms-syslog/workspace-settings.png)
-2. Click `Data`
-3. Click `Syslog`
+2. Select `Data`
+3. Select `Syslog`
4. Configure Local0 as the Facility to track. You can add another Facility if you changed it in fabricSettings.

![Configure Syslog](media/service-fabric-diagnostics-oms-syslog/syslog-configure.png)

5. Head over to the query explorer by selecting `Logs` in the workspace resource's menu to start querying.

![Workspace logs](media/service-fabric-diagnostics-oms-syslog/workspace-logs.png)
-6. You can query against the `Syslog` table looking for `ServiceFabric` as the ProcessName. The query below is an example of how to parse the JSON in the event and display its contents
+6. You can query against the `Syslog` table looking for `ServiceFabric` as the ProcessName. The following query is an example of how to parse the JSON in the event and display its contents:
```kusto
Syslog
| where ProcessName == "ServiceFabric"
| extend $payload = parse_json(SyslogMessage)
| project $payload
```
service-fabric Service Fabric Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-overview.md
- Title: Azure Service Fabric Monitoring and Diagnostics Overview
-description: Learn about monitoring and diagnostics for Azure Service Fabric clusters, applications, and services.
----- Previously updated : 07/14/2022--
-# Monitoring and diagnostics for Azure Service Fabric
-
-This article provides an overview of monitoring and diagnostics for Azure Service Fabric. Monitoring and diagnostics are critical to developing, testing, and deploying workloads in any cloud environment. For example, you can track how your applications are used, the actions taken by the Service Fabric platform, your resource utilization with performance counters, and the overall health of your cluster. You can use this information to diagnose and correct issues, and prevent them from occurring in the future. The next few sections will briefly explain each area of Service Fabric monitoring to consider for production workloads.
--
-## Application monitoring
-Application monitoring tracks how features and components of your application are being used. You want to monitor your applications to make sure issues that impact users are caught. The responsibility of application monitoring is on the users developing an application and its services since it is unique to the business logic of your application. Monitoring your applications can be useful in the following scenarios:
-* How much traffic is my application experiencing? - Do you need to scale your services to meet user demands or address a potential bottleneck in your application?
-* Are my service to service calls successful and tracked?
-* What actions are taken by the users of my application? - Collecting telemetry can guide future feature development and better diagnostics for application errors
-* Is my application throwing unhandled exceptions?
-* What is happening within the services running inside my containers?
-
-The great thing about application monitoring is that developers can use whatever tools and framework they'd like since it lives within the context of your application! You can learn more about the Azure solution for application monitoring with Azure Monitor Application Insights in [Event analysis with Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md).
-We also have a tutorial with how to [set this up for .NET Applications](service-fabric-tutorial-monitoring-aspnet.md). This tutorial goes over how to install the right tools, an example to write custom telemetry in your application, and viewing the application diagnostics and telemetry in the Azure portal.
--
-## Platform (Cluster) monitoring
-A user is in control over what telemetry comes from their application since a user writes the code itself, but what about the diagnostics from the Service Fabric platform? One of Service Fabric's goals is to keep applications resilient to hardware failures. This goal is achieved through the platform's system services' ability to detect infrastructure issues and rapidly failover workloads to other nodes in the cluster. But in this particular case, what if the system services themselves have issues? Or if in attempting to deploy or move a workload, rules for the placement of services are violated? Service Fabric provides diagnostics for these and more to make sure you are informed about activity taking place in your cluster. Some sample scenarios for cluster monitoring include:
-
-Service Fabric provides a comprehensive set of events out of the box. These [Service Fabric events](service-fabric-diagnostics-events.md) can be accessed through the EventStore or the operational channel (event channel exposed by the platform).
-
-* Service Fabric event channels - On Windows, Service Fabric events are available from a single ETW provider with a set of relevant `logLevelKeywordFilters` used to pick between Operational and Data & Messaging channels - this is the way in which we separate out outgoing Service Fabric events to be filtered on as needed. On Linux, Service Fabric events come through LTTng and are put into one Storage table, from where they can be filtered as needed. These channels contain curated, structured events that can be used to better understand the state of your cluster. Diagnostics are enabled by default at the cluster creation time, which create an Azure Storage table where the events from these channels are sent for you to query in the future.
-
-* EventStore - The EventStore is a feature offered by the platform that provides Service Fabric platform events available in the Service Fabric Explorer and through REST API. You can see a snapshot view of what's going on in your cluster for each entity e.g. node, service, application and query based on the time of the event. You can also Read more about the EventStore at the [EventStore Overview](service-fabric-diagnostics-eventstore.md).
-
-![Screenshot shows the EVENTS tab of the Nodes pane several events, including a NodeDown event.](media/service-fabric-diagnostics-overview/eventstore.png)
-
-The diagnostics provided are in the form of a comprehensive set of events out of the box. These [Service Fabric events](service-fabric-diagnostics-events.md) illustrate actions done by the platform on different entities such as Nodes, Applications, Services, Partitions etc. In the last scenario above, if a node were to go down, the platform would emit a `NodeDown` event and you could be notified immediately by your monitoring tool of choice. Other common examples include `ApplicationUpgradeRollbackStarted` or `PartitionReconfigured` during a failover. **The same events are available on both Windows and Linux clusters.**
-
-The events are sent through standard channels on both Windows and Linux and can be read by any monitoring tool that supports these. The Azure Monitor solution is Azure Monitor logs. Feel free to read more about our [Azure Monitor logs integration](service-fabric-diagnostics-event-analysis-oms.md) which includes a custom operational dashboard for your cluster and some sample queries from which you can create alerts. More cluster monitoring concepts are available at [Platform level event and log generation](service-fabric-diagnostics-event-generation-infra.md).
-
-### Health monitoring
-The Service Fabric platform includes a health model, which provides extensible health reporting for the status of entities in a cluster. Each node, application, service, partition, replica, or instance, has a continuously updatable health status. The health status can either be "OK", "Warning", or "Error". Think of Service Fabric events as verbs done by the cluster to various entities and health as an adjective for each entity. Each time the health of a particular entity transitions, an event will also be emitted. This way you can set up queries and alerts for health events in your monitoring tool of choice, just like any other event.
-
-Additionally, we even let users override health for entities. If your application is going through an upgrade and you have validation tests failing, you can write to Service Fabric Health using the Health API to indicate your application is no longer healthy, and Service Fabric will automatically rollback the upgrade! For more on the health model, check out the [introduction to Service Fabric health monitoring](service-fabric-health-introduction.md)
-
-![SFX health dashboard](media/service-fabric-diagnostics-overview/sfx-healthstatus.png)
--
-### Watchdogs
-Generally, a watchdog is a separate service that watches health and load across services, pings endpoints, and reports unexpected health events in the cluster. This can help prevent errors that may not be detected based only on the performance of a single service. Watchdogs are also a good place to host code that performs remedial actions that don't require user interaction, such as cleaning up log files in storage at certain time intervals. If you want a fully implemented, open source SF watchdog service that includes an easy-to-use watchdog extensibility model and that runs in both Windows and Linux clusters, see the [FabricObserver](https://github.com/microsoft/service-fabric-observer) project. FabricObserver is production-ready software. We encourage you to deploy FabricObserver to your test and production clusters and extend it to meet your needs either through its plug-in model or by forking it and writing your own built-in observers. The former (plug-ins) is the recommended approach.
-
-## Infrastructure (performance) monitoring
-Now that we've covered the diagnostics in your application and the platform, how do we know the hardware is functioning as expected? Monitoring your underlying infrastructure is a key part of understanding the state of your cluster and your resource utilization. Measuring system performance depends on many factors that can be subjective depending on your workloads. These factors are typically measured through performance counters. These performance counters can come from a variety of sources including the operating system, the .NET framework, or the Service Fabric platform itself. Some scenarios in which they would be useful are
-
-* Am I utilizing my hardware efficiently? Do you want to use your hardware at 90% CPU or 10% CPU. This comes in handy when scaling your cluster, or optimizing your application's processes.
-* Can I predict infrastructure issues proactively? - many issues are preceded by sudden changes (drops) in performance, so you can use performance counters such as network I/O and CPU utilization to predict and diagnose the issues proactively.
-
-A list of performance counters that should be collected at the infrastructure level can be found at [Performance metrics](service-fabric-diagnostics-event-generation-perf.md).
-
-Service Fabric also provides a set of performance counters for the Reliable Services and Actors programming models. If you are using either of these models, these performance counters can information to ensure that your actors are spinning up and down correctly, or that your reliable service requests are being handled fast enough. For more information, see [Monitoring for Reliable Service Remoting](service-fabric-reliable-serviceremoting-diagnostics.md#performance-counters) and [Performance monitoring for Reliable Actors](service-fabric-reliable-actors-diagnostics.md#performance-counters).
-
-The Azure Monitor solution to collect these is Azure Monitor logs just like platform level monitoring. You should use the [Log Analytics agent](service-fabric-diagnostics-oms-agent.md) to collect the appropriate performance counters, and view them in Azure Monitor logs.
-
-## Recommended Setup
-Now that we've gone over each area of monitoring and example scenarios, here is a summary of the Azure monitoring tools and set up needed to monitor all areas above.
-
-* Application monitoring with [Application Insights](service-fabric-tutorial-monitoring-aspnet.md)
-* Cluster monitoring with [Diagnostics Agent](service-fabric-diagnostics-event-aggregation-wad.md) and [Azure Monitor logs](service-fabric-diagnostics-oms-setup.md)
-* Infrastructure monitoring with [Azure Monitor logs](service-fabric-diagnostics-oms-agent.md)
-
-You can also use and modify the sample ARM template located [here](service-fabric-diagnostics-oms-setup.md#deploy-azure-monitor-logs-with-azure-resource-manager) to automate deployment of all necessary resources and agents.
-
-## Other logging solutions
-
-Although the two solutions we recommended, [Azure Monitor logs](service-fabric-diagnostics-event-analysis-oms.md) and [Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md) have built in integration with Service Fabric, many events are written out through ETW providers and are extensible with other logging solutions. You should also look into the [Elastic Stack](https://www.elastic.co/products) (especially if you are considering running a cluster in an offline environment), [Dynatrace](https://www.dynatrace.com/), or any other platform of your preference. We have a list of integrated partners available [here](service-fabric-diagnostics-partners.md).
-
-The key points for any platform you choose should include how comfortable you are with the user interface, the querying capabilities, the custom visualizations and dashboards available, and the additional tools they provide to enhance your monitoring experience.
-
-## Next steps
-
-* For getting started with instrumenting your applications, see [Application level event and log generation](service-fabric-diagnostics-event-generation-app.md).
-* Go through the steps to set up Application Insights for your application with [Monitor and diagnose an ASP.NET Core application on Service Fabric](service-fabric-tutorial-monitoring-aspnet.md).
-* Learn more about monitoring the platform and the events Service Fabric provides for you at [Platform level event and log generation](service-fabric-diagnostics-event-generation-infra.md).
-* Configure the Azure Monitor logs integration with Service Fabric at [Set up Azure Monitor logs for a cluster](service-fabric-diagnostics-oms-setup.md)
-* Learn how to set up Azure Monitor logs for monitoring containers - [Monitoring and Diagnostics for Windows Containers in Azure Service Fabric](service-fabric-tutorial-monitoring-wincontainers.md).
-* See example diagnostics problems and solutions with Service Fabric in [diagnosing common scenarios](service-fabric-diagnostics-common-scenarios.md)
-* Check out other diagnostics products that integrate with Service Fabric in [Service Fabric diagnostic partners](service-fabric-diagnostics-partners.md)
-* Learn about general monitoring recommendations for Azure resources - [Best Practices - Monitoring and diagnostics](/azure/architecture/best-practices/monitoring).
service-fabric Service Fabric Diagnostics Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-partners.md
Last updated 07/14/2022
# Azure Service Fabric Monitoring Partners
-This article illustrates how one can monitor their Service Fabric applications, clusters, and infrastructure with a handful of partner solutions. We have worked with each of the partners below to create integrated offerings for Service Fabric.
+This article illustrates how you can monitor your Service Fabric applications, clusters, and infrastructure with a handful of partner solutions. We worked with each of the following partners to create integrated offerings for Service Fabric.
## Dynatrace
-Our integration with Dynatrace provides many out of the box features to monitor your Service Fabric clusters. Installing the Dynatrace OneAgent on your VMSS instances gives you performance counters and a topology of your Service Fabric deployment down to the App level. Dynatrace is also a great choice for on-premises monitoring. Check out more of the features listed in the [announcement](https://www.dynatrace.com/news/blog/automatic-end-to-end-service-fabric-monitoring-with-dynatrace/) and [instructions](https://www.dynatrace.com/news/blog/automatic-end-to-end-service-fabric-monitoring-with-dynatrace/) to enable Dynatrace on your cluster.
+Our integration with Dynatrace provides many out-of-the-box features to monitor your Service Fabric clusters. Installing the Dynatrace OneAgent on your Azure Virtual Machine Scale Sets instances gives you performance counters and a topology of your Service Fabric deployment down to the App level. Dynatrace is also a great choice for on-premises monitoring. Check out more of the features listed in the [announcement](https://www.dynatrace.com/news/blog/automatic-end-to-end-service-fabric-monitoring-with-dynatrace/) and [instructions](https://www.dynatrace.com/news/blog/automatic-end-to-end-service-fabric-monitoring-with-dynatrace/) to enable Dynatrace on your cluster.
## Datadog
-Datadog has an extension for VMSS for both Windows and Linux instances. Using Datadog you can collect Windows event logs and thereby collect Service Fabric platform events on Windows. Check out the instructions on how to send your diagnostics data to Datadog [here](https://www.datadoghq.com/blog/azure-monitoring-enhancements/#integrate-with-azure-service-fabric).
+Datadog has an extension for Virtual Machine Scale Sets for both Windows and Linux instances. Using Datadog you can collect Windows event logs and thereby collect Service Fabric platform events on Windows. Check out the instructions on how to send your diagnostics data to Datadog [here](https://www.datadoghq.com/blog/azure-monitoring-enhancements/#integrate-with-azure-service-fabric).
## AppDynamics
Humio is a log collection service that can gather logs from your applications an
## Next steps
-* Get an [overview of monitoring and diagnostics](service-fabric-diagnostics-overview.md) in Service Fabric
+* Get an [overview of monitoring and diagnostics](monitor-service-fabric.md) in Service Fabric
* Learn how to [diagnose common scenarios](service-fabric-diagnostics-common-scenarios.md) with our first party tools
service-fabric Service Fabric Diagnostics Perf Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-perf-wad.md
Title: Performance monitoring with Windows Azure Diagnostics
-description: Use Windows Azure Diagnostics to collect performance counters for your Azure Service Fabric clusters.
+ Title: Performance monitoring with Azure Diagnostics
+description: Use Azure Diagnostics to collect performance counters for your Azure Service Fabric clusters.
This document covers steps required to set up collection of performance counters
To collect performance counters via WAD, you need to modify the configuration appropriately in your cluster's Resource Manager template. Follow these steps to add a performance counter you want to collect to your template and run a Resource Manager resource upgrade.
-1. Find the WAD configuration in your cluster's template - find `WadCfg`. You will be adding performance counters to collect under the `DiagnosticMonitorConfiguration`.
+1. Find the WAD configuration in your cluster's template by searching for `WadCfg`. You'll add performance counters to collect under the `DiagnosticMonitorConfiguration`.
2. Set up your configuration to collect performance counters by adding the following section to your `DiagnosticMonitorConfiguration`.
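   The section has roughly the following shape; this is a sketch based on the common WadCfg schema, and the transfer period shown is illustrative:

```json
"PerformanceCounters": {
    "scheduledTransferPeriod": "PT1M",
    "PerformanceCounterConfiguration": []
}
```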
To collect performance counters via WAD, you need to modify the configuration ap
3. Add the performance counters you would like to collect to the `PerformanceCounterConfiguration` that was declared in the previous step. Each counter you would like to collect is defined with a `counterSpecifier`, `sampleRate`, `unit`, `annotation`, and any relevant `sinks`.
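   A single hypothetical entry might look like the following sketch; the counter, sample rate, and display name are illustrative:

```json
{
    "counterSpecifier": "\\Processor(_Total)\\% Processor Time",
    "sampleRate": "PT15S",
    "unit": "Percent",
    "annotation": [
        {
            "displayName": "Total CPU utilization",
            "locale": "en-us"
        }
    ]
}
```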
-Here is an example of a configuration with the counter for the *Total Processor Time* (the amount of time the CPU was in use for processing operations) and *Service Fabric Actor Method Invocations per Second*, one of the Service Fabric custom performance counters. Refer to [Reliable Actor Performance Counters](service-fabric-reliable-actors-diagnostics.md#list-of-events-and-performance-counters) and [Reliable Service Performance Counters](service-fabric-reliable-serviceremoting-diagnostics.md#list-of-performance-counters) for a full list of Service Fabric custom perf counters.
+Here's an example of a configuration with the counter for the *Total Processor Time* (the amount of time the CPU was in use for processing operations) and *Service Fabric Actor Method Invocations per Second*, one of the Service Fabric custom performance counters. Refer to [Reliable Actor Performance Counters](service-fabric-reliable-actors-diagnostics.md#list-of-events-and-performance-counters) and [Reliable Service Performance Counters](service-fabric-reliable-serviceremoting-diagnostics.md#list-of-performance-counters) for a full list of Service Fabric custom perf counters.
```json "WadCfg": {
Here is an example of a configuration with the counter for the *Total Processor
}, ```
- The sample rate for the counter can be modified as per your needs. The format for it is `PT<time><unit>`, so if you want the counter collected every second, then you should set the `"sampleRate": "PT15S"`.
+ The sample rate for the counter can be modified to suit your needs. The format is `PT<time><unit>`; for example, to collect the counter every 15 seconds, set `"sampleRate": "PT15S"`.
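For illustration, here's a minimal sketch of a single counter entry that uses the fields named above (assuming the standard `WadCfg` JSON schema; the counter shown is the *Total Processor Time* counter from this article's example):

```json
"PerformanceCounters": {
  "scheduledTransferPeriod": "PT1M",
  "PerformanceCounterConfiguration": [
    {
      "counterSpecifier": "\\Processor(_Total)\\% Processor Time",
      "sampleRate": "PT15S",
      "unit": "Percent",
      "annotation": [
        { "displayName": "CPU percentage", "locale": "en-us" }
      ]
    }
  ]
}
```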
- You can also use variables in your ARM template to collect an array of performance counters, which can come in handy when you collect performance counters per process. In the below example, we are collecting processor time and garbage collector time per process and then 2 performance counters on the nodes themselves all using variables.
+ You can also use variables in your ARM template to collect an array of performance counters, which can come in handy when you collect performance counters per process. In the following example, we collect processor time and garbage collector time per process, and then two performance counters on the nodes themselves, all by using variables.
```json "variables": {
Here is an example of a configuration with the counter for the *Total Processor
.... ```
-1. Once you have added the appropriate performance counters that need to be collected, you need to upgrade your cluster resource so that these changes are reflected in your running cluster. Save your modified `template.json` and open up PowerShell. You can upgrade your cluster using `New-AzResourceGroupDeployment`. The call requires the name of the resource group, the updated template file, and the parameters file, and prompts Resource Manager to make appropriate changes to the resources that you updated. Once you are signed into your account and are in the right subscription, use the following command to run the upgrade:
+1. Once you've added the performance counters you need, upgrade your cluster resource so that these changes are reflected in your running cluster. Save your modified `template.json` and open PowerShell. You can upgrade your cluster by using `New-AzResourceGroupDeployment`. The call requires the name of the resource group, the updated template file, and the parameters file, and it prompts Resource Manager to make the appropriate changes to the resources that you updated. Once you're signed in to your account and in the right subscription, use the following command to run the upgrade:
```powershell New-AzResourceGroupDeployment -ResourceGroupName <ResourceGroup> -TemplateFile <PathToTemplateFile> -TemplateParameterFile <PathToParametersFile> -Verbose
Here is an example of a configuration with the counter for the *Total Processor
1. Once the upgrade finishes rolling out (which takes 15 to 45 minutes, depending on whether it's the first deployment and on the size of your resource group), WAD should be collecting the performance counters and sending them to the table named WADPerformanceCountersTable in the storage account associated with your cluster. See your performance counters in Application Insights by [adding the AI Sink to the Resource Manager template](service-fabric-diagnostics-event-aggregation-wad.md#add-the-application-insights-sink-to-the-resource-manager-template). ## Next steps
-* Collect more performance counters for your cluster. See [Performance metrics](service-fabric-diagnostics-event-generation-perf.md) for a list of counters you should collect.
-* [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../virtual-machines/extensions/diagnostics-template.md) to make further modifications to your `WadCfg`, including configuring additional storage accounts to send diagnostics data to.
-* Visit the [WadCfg builder](https://azure.github.io/azure-diagnostics-tools/config-builder/) to build a template from scratch and make sure your syntax is correct.(https://azure.github.io/azure-diagnostics-tools/config-builder/) to build a template from scratch and make sure your syntax is correct.
+* Collect more performance counters for your cluster. See [Performance metrics](monitor-service-fabric-reference.md#performance-metrics) for a list of counters you should collect.
+* [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../virtual-machines/extensions/diagnostics-template.md) to make further modifications to your `WadCfg`, including configuring more storage accounts to send diagnostics data to.
+* Visit the [WadCfg builder](https://azure.github.io/azure-diagnostics-tools/config-builder/) to build a template from scratch and make sure your syntax is correct.
service-fabric Service Fabric How To Diagnostics Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-diagnostics-log.md
Some third-party providers use the approach described in the preceding section,
## Next steps -- Read more information about [application monitoring in Service Fabric](service-fabric-diagnostics-event-generation-app.md).
+- Learn more about [application monitoring in Service Fabric](monitor-service-fabric.md#application-monitoring).
- Read about logging with [EventFlow](service-fabric-diagnostics-event-aggregation-eventflow.md) and [Windows Azure Diagnostics](service-fabric-diagnostics-event-aggregation-wad.md).
service-fabric Service Fabric Linux Windows Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-linux-windows-differences.md
There are some features that are supported on Windows but not on Linux. The foll
* Domain Name System (DNS) service for Service Fabric services (DNS service is supported for containers on Linux) * CLI command equivalents of certain PowerShell commands detailed in [PowerShell cmdlets that don't work against a Linux Service Fabric Cluster](#powershell-cmdlets-that-dont-work-against-a-linux-service-fabric-cluster). Most of these cmdlets only apply to standalone clusters. * [Differences in log implementation that can affect scalability](service-fabric-concepts-scalability.md#choosing-a-platform)
-* [Difference in Service Fabric Events Channel](service-fabric-diagnostics-overview.md#platform-cluster-monitoring)
+* [Difference in Service Fabric Events Channel](monitor-service-fabric.md#platform-cluster-monitoring)
## PowerShell cmdlets that don't work against a Linux Service Fabric cluster
service-fabric Service Fabric Tutorial Monitor Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitor-cluster.md
Last updated 07/14/2022
# Tutorial: Monitor a Service Fabric cluster in Azure
-Monitoring and diagnostics are critical to developing, testing, and deploying workloads in any cloud environment. This tutorial is part two of a series, and shows you how to monitor and diagnose a Service Fabric cluster using events, performance counters, and health reports. For more information, read the overview about [cluster monitoring](service-fabric-diagnostics-overview.md#platform-cluster-monitoring) and [infrastructure monitoring](service-fabric-diagnostics-overview.md#infrastructure-performance-monitoring).
+Monitoring and diagnostics are critical to developing, testing, and deploying workloads in any cloud environment. This tutorial is part two of a series and shows you how to monitor and diagnose a Service Fabric cluster by using events, performance counters, and health reports. For more information, read the overview about [cluster monitoring](monitor-service-fabric.md#platform-cluster-monitoring) and [infrastructure monitoring](monitor-service-fabric.md#infrastructure-performance-monitoring).
In this tutorial, you learn how to:
Before you begin this tutorial:
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) * Install [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli). * Create a secure [Windows cluster](service-fabric-tutorial-create-vnet-and-windows-cluster.md)
-* Setup [diagnostics collection](service-fabric-tutorial-create-vnet-and-windows-cluster.md#configurediagnostics_anchor) for the cluster
+* Set up [diagnostics collection](service-fabric-tutorial-create-vnet-and-windows-cluster.md#configurediagnostics_anchor) for the cluster
* Enable the [EventStore service](service-fabric-tutorial-create-vnet-and-windows-cluster.md#configureeventstore_anchor) in the cluster * Configure [Azure Monitor logs and the Log Analytics agent](service-fabric-tutorial-create-vnet-and-windows-cluster.md#configureloganalytics_anchor) for the cluster
To access the Service Fabric Analytics solution, go to the [Azure portal](https:
Select the resource **ServiceFabric(mysfomsworkspace)**.
-In **Overview** you see tiles in the form of a graph for each of the solutions enabled, including one for Service Fabric. Click the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
+In **Overview**, you see a graph tile for each enabled solution, including one for Service Fabric. Select the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
![Screenshot that shows the Service Fabric graph.](media/service-fabric-tutorial-monitor-cluster/oms-service-fabric-summary.png)
The following image shows the home page of the Service Fabric Analytics solution
### View Service Fabric Events, including actions on nodes
-On the Service Fabric Analytics page, click on the graph for **Cluster Events**. The logs for all the system events that have been collected appear. For reference, these are from the **WADServiceFabricSystemEventsTable** in the Azure Storage account, and similarly the reliable services and actors events you see next are from those respective tables.
+On the Service Fabric Analytics page, select the graph for **Cluster Events**. The logs for all the system events that have been collected appear. For reference, these events come from the **WADServiceFabricSystemEventsTable** in the Azure Storage account; similarly, the reliable services and actors events you see next come from their respective tables.
![Query Operational Channel](media/service-fabric-tutorial-monitor-cluster/oms-service-fabric-events.png)
ServiceFabricOperationalEvent
| sort by TimeGenerated ```
-Returns Health Reports with HealthState == 3 (Error) and extract additional properties from the EventMessage field:
+The following query returns health reports with HealthState == 3 (Error) and extracts more properties from the EventMessage field:
```kusto ServiceFabricOperationalEvent
app('PlunkoServiceFabricCluster').traces
### View Service Fabric application events
-You can view events for the reliable services and reliable actors applications deployed on the cluster. On the Service Fabric Analytics page, click the graph for **Application Events**.
+You can view events for the reliable services and reliable actors applications deployed on the cluster. On the Service Fabric Analytics page, select the graph for **Application Events**.
Run the following query to view events from your reliable services applications: ```kusto
To view performance counters, go to the [Azure portal](https://portal.azure.com)
Select the resource **ServiceFabric(mysfomsworkspace)**, then **Log Analytics Workspace**, and then **Advanced Settings**.
-Click **Data**, then click **Windows Performance Counters**. There is a list of default counters you can choose to enable and you can set the interval for collection too. You can also add [additional performance counters](service-fabric-diagnostics-event-generation-perf.md) to collect. The proper format is referenced in this [article](/windows/desktop/PerfCtrs/specifying-a-counter-path). Click **Save**, then click **OK**.
+Select **Data**, then select **Windows Performance Counters**. There's a list of default counters you can choose to enable, and you can also set the collection interval. You can add [more performance counters](monitor-service-fabric-reference.md#performance-metrics) to collect as well. The proper format is described in [Specifying a counter path](/windows/desktop/PerfCtrs/specifying-a-counter-path). Select **Save**, and then select **OK**.
-Close the Advanced Settings blade and select **Workspace summary** under the **General** heading. For each of the solutions enabled there is a graphical tile, including one for Service Fabric. Click the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
+Close the **Advanced Settings** pane and select **Workspace summary** under the **General** heading. There's a graphical tile for each enabled solution, including one for Service Fabric. Select the **Service Fabric** graph to continue to the Service Fabric Analytics solution.
-There are graphical tiles for operational channel and reliable services events. The graphical representation of the data flowing in for the counters you have selected will appear under **Node Metrics**.
+There are graphical tiles for operational channel and reliable services events. The graphical representation of the data flowing in for the counters you selected appears under **Node Metrics**.
-Select the **Container Metric** graph to see additional details. You can also query on performance counter data similarly to cluster events and filter on the nodes, perf counter name, and values using the Kusto query language.
+Select the **Container Metric** graph to see more details. You can also query on performance counter data similarly to cluster events and filter on the nodes, perf counter name, and values using the Kusto query language.
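For example, a query along these lines charts processor usage per node (a sketch that assumes the standard Log Analytics `Perf` table; adjust the counter names to the ones you enabled):

```kusto
// Average % Processor Time per computer, in 5-minute bins, over the last hour.
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
```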
## Query the EventStore service The [EventStore service](service-fabric-diagnostics-eventstore.md) provides a way to understand the state of your cluster or workloads at a given point in time. The EventStore is a stateful Service Fabric service that maintains events from the cluster. The events are exposed through the [Service Fabric Explorer](service-fabric-visualizing-your-cluster.md), REST, and APIs. EventStore queries the cluster directly to get diagnostics data on any entity in your cluster
service-fabric Service Fabric Tutorial Monitoring Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitoring-aspnet.md
When you finish making these changes, select **Start** in the application so tha
## Related content
-* Learn more about [monitoring and diagnostics in Service Fabric](service-fabric-diagnostics-overview.md).
+* Learn more about [monitoring and diagnostics in Service Fabric](monitor-service-fabric.md).
* Review [Service Fabric event analysis by using Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md). * Learn more about [Application Insights](/azure/application-insights/).
service-fabric Service Fabric Tutorial Monitoring Wincontainers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitoring-wincontainers.md
In this tutorial, you learn how to:
> * Use a Log Analytics workspace to view and query logs from your containers and nodes > * Configure the Log Analytics agent to pick up container and node metrics - ## Prerequisites Before you begin this tutorial, you should:
Before you begin this tutorial, you should:
## Setting up Azure Monitor logs with your cluster in the Resource Manager template
-In the case that you used the [template provided](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-OMS-UnSecure) in the first part of this tutorial, it should include the following additions to a generic Service Fabric Azure Resource Manager template. In case the case that you have a cluster of your own that you are looking to set up for monitoring containers with Azure Monitor logs:
+If you used the [template provided](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-OMS-UnSecure) in the first part of this tutorial, it should include the following additions to a generic Service Fabric Azure Resource Manager template. If you have a cluster of your own that you want to set up for monitoring containers with Azure Monitor logs:
* Make the following changes to your Resource Manager template. * Deploy it using PowerShell to upgrade your cluster by [deploying the template](./service-fabric-cluster-creation-via-arm.md). Azure Resource Manager realizes that the resource exists, so it rolls it out as an upgrade.
Make the following changes in your *template.json*:
"omsSolution": "ServiceFabric" ```
-3. Add the Microsoft Monitoring Agent as a virtual machine extension. Find virtual machine scale sets resource: *resources* > *"apiVersion": "[variables('vmssApiVersion')]"*. Under the *properties* > *virtualMachineProfile* > *extensionProfile* > *extensions*, add the following extension description under the *ServiceFabricNode* extension:
+3. Add the Microsoft Monitoring Agent as a virtual machine extension. Find virtual machine scale sets resource: *resources* > *"apiVersion"*: *"[variables('vmssApiVersion')]"*. Under the *properties* > *virtualMachineProfile* > *extensionProfile* > *extensions*, add the following extension description under the *ServiceFabricNode* extension:
```json {
Make the following changes in your *template.json*:
}, ```
-[Here](https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/d2ffa318581fc23ac7f1b0ab2b52db1a0d7b4ba7/5-VM-Windows-OMS-UnSecure/sfclusteroms.json) is a sample template (used in part one of this tutorial) that has all of these changes that you can reference as needed. These changes will add an Log Analytics workspace to your resource group. The workspace will be configured to pick up Service Fabric platform events from the storage tables configured with the [Windows Azure Diagnostics](service-fabric-diagnostics-event-aggregation-wad.md) agent. The Log Analytics agent (Microsoft Monitoring Agent) has also been added to each node in your cluster as a virtual machine extension - this means that as you scale your cluster, the agent is automatically configured on each machine and hooked up to the same workspace.
+A [sample template](https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/d2ffa318581fc23ac7f1b0ab2b52db1a0d7b4ba7/5-VM-Windows-OMS-UnSecure/sfclusteroms.json) (used in part one of this tutorial) has all of these changes that you can reference as needed. These changes add a Log Analytics workspace to your resource group. The workspace is configured to pick up Service Fabric platform events from the storage tables configured with the [Windows Azure Diagnostics](service-fabric-diagnostics-event-aggregation-wad.md) agent. The Log Analytics agent (Microsoft Monitoring Agent) has also been added to each node in your cluster as a virtual machine extension, which means that as you scale your cluster, the agent is automatically configured on each machine and connected to the same workspace.
-Deploy the template with your new changes to upgrade your current cluster. You should see the log analytics resources in your resource group once this has completed. When the cluster is ready, deploy your containerized application to it. In the next step, we will set up monitoring the containers.
+Deploy the template with your new changes to upgrade your current cluster. You should see the Log Analytics resources in your resource group after the deployment completes. When the cluster is ready, deploy your containerized application to it. In the next step, you set up monitoring for the containers.
## Add the Container Monitoring Solution to your Log Analytics workspace
To set up the Container solution in your workspace, search for *Container Monito
![Adding Containers solution](./media/service-fabric-tutorial-monitoring-wincontainers/containers-solution.png)
-When prompted for the *Log Analytics workspace*, select the workspace that was created in your resource group, and select **Create**. This adds a *Container Monitoring Solution* to your workspace, initiating the Log Analytics agent deployed by the template to start collecting docker logs and stats.
+When prompted for the *Log Analytics workspace*, select the workspace that was created in your resource group, and select **Create**. This adds a *Container Monitoring Solution* to your workspace, which prompts the Log Analytics agent deployed by the template to start collecting Docker logs and stats.
Navigate back to your *resource group*, where you should now see the newly added monitoring solution. If you select it, the landing page should show you the number of container images you have running.
Navigate back to your *resource group*, where you should now see the newly added
![Container solution landing page](./media/service-fabric-tutorial-monitoring-wincontainers/solution-landing.png)
-Selecting the **Container Monitor Solution** will take you to a more detailed dashboard, which allows you to scroll through multiple panels as well as run queries in Azure Monitor logs.
+Selecting the **Container Monitor Solution** takes you to a more detailed dashboard, which allows you to scroll through multiple panels as well as run queries in Azure Monitor logs.
-Since the agent is picking up docker logs, it defaults to showing *stdout* and *stderr*. If you scroll horizontally, you will see container image inventory, status, metrics, and sample queries that you could run to get more helpful data.
+Since the agent is picking up Docker logs, it defaults to showing *stdout* and *stderr*. If you scroll horizontally, you'll see container image inventory, status, metrics, and sample queries that you can run to get more helpful data.
![Container solution dashboard](./media/service-fabric-tutorial-monitoring-wincontainers/container-metrics.png)
-Clicking into any of these panels will take you to the Kusto query that is generating the displayed value. Change the query to *\** to see all the different kinds of logs that are being picked up. From here, you can query or filter for container performance, logs, or look at Service Fabric platform events. Your agents are also constantly emitting a heartbeat from each node, that you can look at to make sure data is still being gathered from all your machines if your cluster configuration changes.
+Selecting any of these panels takes you to the Kusto query that generates the displayed value. Change the query to *\** to see all the different kinds of logs that are being picked up. From here, you can query or filter for container performance or logs, or look at Service Fabric platform events. Your agents also constantly emit a heartbeat from each node, which you can check to make sure data is still being gathered from all your machines if your cluster configuration changes.
![Container query](./media/service-fabric-tutorial-monitoring-wincontainers/query-sample.png)
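For instance, a query like the following filters for container error output (a sketch that assumes the `ContainerLog` table populated by the Container Monitoring Solution):

```kusto
// Show the 50 most recent stderr entries across all containers.
ContainerLog
| where LogEntrySource == "stderr"
| project TimeGenerated, Computer, Image, LogEntry
| order by TimeGenerated desc
| take 50
```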
Clicking into any of these panels will take you to the Kusto query that is gener
Another benefit of using the Log Analytics agent is the ability to change the performance counters you want to pick up through the Log Analytics UI experience, rather than having to configure the Azure diagnostics agent and do a Resource Manager template-based upgrade each time. To do this, select **OMS Workspace** on the landing page of your Container Monitoring (or Service Fabric) solution.
-This will take you to your Log Analytics workspace, where you can view your solutions, create custom dashboards, as well as configure the Log Analytics agent.
+This takes you to your Log Analytics workspace, where you can view your solutions, create custom dashboards, and configure the Log Analytics agent.
* Select **Advanced Settings** to open the Advanced Settings menu. * Select **Connected Sources** > **Windows Servers** to verify that you have *5 Windows Computers Connected*.
-* Select **Data** > **Windows Performance Counters** to search for and add new performance counters. Here you will see a list of recommendations from Azure Monitor logs for performance counters you can collect as well as the option to search for other counters. Verify that **Processor(_Total)\% Processor Time** and **Memory(*)\Available MBytes** counters are being collected.
+* Select **Data** > **Windows Performance Counters** to search for and add new performance counters. Here you'll see a list of recommendations from Azure Monitor logs for performance counters you can collect as well as the option to search for other counters. Verify that **Processor(_Total)\% Processor Time** and **Memory(*)\Available MBytes** counters are being collected.
-**refresh** your Container Monitoring Solution in a few minutes, and you should start to see *Computer Performance* data coming in. This will help you understand how your resources are being used. You can also use these metrics to make appropriate decisions about scaling your cluster, or to confirm if a cluster is balancing your load as expected.
+**Refresh** your Container Monitoring Solution in a few minutes, and you should start to see *Computer Performance* data coming in. This helps you understand how your resources are being used. You can also use these metrics to make appropriate decisions about scaling your cluster, or to confirm if a cluster is balancing your load as expected.
*Note: Make sure your time filters are set appropriately for you to consume these metrics.*
Now that you have configured monitoring for your containerized application, try:
* Configuring Azure Monitor logs for a Linux cluster, following similar steps as this tutorial. Reference [this template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Ubuntu-1-NodeType-Secure-OMS) to make changes in your Resource Manager template. * Configure Azure Monitor logs to set up [automated alerting](../azure-monitor/alerts/alerts-overview.md) to aid in detecting and diagnostics.
-* Explore Service Fabric's list of [recommended performance counters](service-fabric-diagnostics-event-generation-perf.md) to configure for your clusters.
+* Explore Service Fabric's list of recommended [performance counters](monitor-service-fabric-reference.md#performance-metrics) to configure for your clusters.
* Get familiarized with the [log search and querying](../azure-monitor/logs/log-query-overview.md) features offered as part of Azure Monitor logs.
storage Storage Use Azcopy Migrate On Premises Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-migrate-on-premises-data.md
Copy the AzCopy command to a text editor. Update the parameter values of the AzC
These examples assume that your folder is named `myFolder`, your storage account name is `mystorageaccount` and your container name is `mycontainer`. > [!NOTE]
-> The Linux example appends a SAS token. You'll need to provide one in your command. The current version of AzCopy V10 doesn't support Microsoft Entra authorization in cron jobs.
+> The Linux example appends a SAS token. You need to provide one in your command. To use Microsoft Entra authentication in cron jobs, configure the `AZCOPY_AUTO_LOGIN_TYPE` environment variable appropriately.
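For example, a crontab entry along these lines uses a managed identity instead of a SAS token (a sketch that assumes the job runs on an Azure VM whose managed identity has been granted access to the storage account; `myFolder`, `mystorageaccount`, and `mycontainer` are the sample names from this article):

```sh
# Sync the folder nightly at 1 AM; AZCOPY_AUTO_LOGIN_TYPE=MSI makes AzCopy sign in with the VM's managed identity.
0 1 * * * AZCOPY_AUTO_LOGIN_TYPE=MSI azcopy sync "/mnt/myFolder" "https://mystorageaccount.blob.core.windows.net/mycontainer" --recursive=true
```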
# [Linux](#tab/linux)
synapse-analytics Gateway Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/gateway-ip-addresses.md
The table below lists the individual Gateway IP addresses and the Gateway IP address ranges per region.
-Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways as per the process outlined at [Azure SQL Database traffic migration to newer Gateways](/azure/azure-sql/database/gateway-migration). We strongly encourage customers to use the **Gateway IP address subnets** in order to not be impacted by this activity in a region.
+Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways, per the process outlined in [Azure SQL Database traffic migration to newer Gateways](/azure/azure-sql/database/gateway-migration).
+We strongly encourage customers to move away from relying on **any individual Gateway IP address**, since these addresses can be retired in the future. Instead, allow network traffic to reach both the individual Gateway IP addresses and the Gateway IP address subnets in a region.
> [!IMPORTANT]
-> - Logins for SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse can land on **any of the Gateways in a region**. For consistent connectivity to SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse, allow network traffic to and from **ALL** Gateway IP addresses and Gateway IP address subnets for the region.
-> - Use the Gateway IP addresses in this section if you're using a Proxy connection policy to connect to Azure SQL Database. If you're using the Redirect connection policy, refer to the [Azure IP Ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519) for a list of your region's IP addresses to allow.
+> - Logins for SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse can land on **any of the individual Gateway IP addresses or Gateway IP address subnets in a region**. For consistent connectivity to SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse, allow network traffic to and from **all the individual Gateway IP addresses and Gateway IP address subnets in a region**.
+> - Use the individual Gateway IP addresses and Gateway IP address subnets in this section if you're using a Proxy connection policy to connect to Azure SQL Database. If you're using the Redirect connection policy, refer to the [Azure IP Ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519) for a list of your region's IP addresses to allow.
| Region name | Gateway IP addresses | Gateway IP address subnets | | | | |
synapse-analytics Apache Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-history-server.md
Send feedback with issues by selecting **Provide us feedback**.
![Screenshot showing Spark application and job graph feedback.](./media/apache-spark-history-server/sparkui-graph-feedback.png)
+### Stage number limit
+
+For performance reasons, the graph is only available by default when the Spark application has fewer than 500 stages. If there are too many stages, it fails with an error like this:
+
+`` The number of stages in this application exceeds limit (500), graph page is disabled in this case.``
+
+As a workaround, before starting a Spark application, apply this Spark configuration to increase the limit:
+
+`` spark.ui.enhancement.maxGraphStages 1000 ``
+
+Note that this might degrade performance of the page and the API, because the content can be too large for the browser to fetch and render.
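For example, in a Synapse notebook you can apply the setting through a `%%configure -f` cell that runs before the session starts (a sketch; the cell body is the JSON payload shown here, and the limit value is an example):

```json
{
    "conf": {
        "spark.ui.enhancement.maxGraphStages": "1000"
    }
}
```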
+ ## Explore the Diagnosis tab in Apache Spark history server To access the Diagnosis tab, select a job ID. Then select **Diagnosis** on the tool menu to get the job Diagnosis view. The diagnosis tab includes **Data Skew**, **Time Skew**, and **Executor Usage Analysis**.
synapse-analytics Disable Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/disable-geo-backup.md
description: How-to guide for disabling geo-backups for a dedicated SQL pool (fo
Previously updated : 01/09/2024 Last updated : 07/23/2024
In this article, you learn to disable geo-backups for your [dedicated SQL pool (
Follow these steps to disable geo-backups for your dedicated SQL pool (formerly SQL DW): > [!NOTE]
-> If you disable geo-backups, you will no longer be able to recover your dedicated SQL pool (formerly SQL DW) to another Azure region.
+> If you disable geo-backups, you will no longer be able to recover your dedicated SQL pool (formerly SQL DW) to another Azure region.
+>
+> - Disabling geo-backup results in the deletion of all existing geo-backups associated with the instance.
+> - Once geo-backup is disabled, you cannot use existing geo-backups.
+> - If the instance is active at the time of disabling geo-backup, all geo-backups will be deleted.
+> - If the instance is paused, geo-backups will be deleted upon resuming the instance.
1. Sign in to your [Azure portal](https://portal.azure.com/) account. 1. Select the dedicated SQL pool (formerly SQL DW) resource where you would like to disable geo-backups.
update-manager Manage Pre Post Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-pre-post-events.md
Title: Manage the pre and post maintenance configuration events (preview) in Azure Update Manager
+ Title: Manage pre and post maintenance configuration events (preview) in Azure Update Manager
description: The article provides the steps to manage the pre and post maintenance events in Azure Update Manager. Previously updated : 07/09/2024 Last updated : 07/24/2024
-# Manage pre and post events maintenance configuration events (preview)
+# Manage pre and post maintenance configuration events (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers :heavy_check_mark: Azure VMs.
To self-register your subscription for public preview, follow these steps:
1. On the **All services** page, search for **Preview features**. 1. On the **Preview Features** page, search and select **Pre and Post Events**. 1. Select the feature and then select **Register** to register the subscription.
-
+ :::image type="content" source="./media/tutorial-using-functions/register-feature.png" alt-text="Screenshot that shows how to register the preview feature." lightbox="./media/tutorial-using-functions/register-feature.png"::: #### [Azure CLI](#tab/cli)
To delete pre and post events, follow these steps:
## Next steps-- For an overview of pre and post events (preview) in Azure Update Manager, refer [here](pre-post-scripts-overview.md)
+- For an overview of pre and post events in Azure Update Manager, see the [overview of pre and post events](pre-post-scripts-overview.md).
- To learn how to create pre and post events, see [pre and post maintenance configuration events](pre-post-events-schedule-maintenance-configuration.md). - To learn how to use pre and post events to turn on and off your VMs using Webhooks, see [Create pre and post events using a webhook with Automation](tutorial-webhooks-using-runbooks.md). - To learn how to use pre and post events to turn on and off your VMs using Azure Functions, see [Create pre and post events using Azure Functions](tutorial-using-functions.md).
update-manager Pre Post Events Schedule Maintenance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-events-schedule-maintenance-configuration.md
Title: Create the pre and post maintenance configuration events (preview) in Azure Update Manager
+ Title: Create pre and post maintenance configuration events (preview) in Azure Update Manager
description: The article provides the steps to create the pre and post maintenance events in Azure Update Manager. Previously updated : 07/09/2024 Last updated : 07/24/2024
This article describes on how to create pre and post events in Azure Update Mana
## Event Grid in schedule maintenance configurations
-Azure Update Manager leverages Event grid to create and manage pre and post events. For more information, go through the [overview of Event Grid](../event-grid/overview.md). To trigger an event either before or after a schedule maintenance window, you require the following:
+Azure Update Manager uses Event Grid to create and manage pre and post events. For more information, see the [overview of Event Grid](../event-grid/overview.md). To trigger an event either before or after a scheduled maintenance window, you need the following:
1. **Schedule maintenance configuration** - You can create Pre and post events for a schedule maintenance configuration in Azure Update Manager. For more information, see [schedule updates using maintenance configurations](scheduled-patching.md). 1. **Action to be performed in the pre or post event** - You can use the [Event handlers](../event-grid/event-handlers.md) (Endpoints) supported by Event Grid to define actions or tasks. Here are examples on how to create Azure Automation Runbooks via Webhooks and Azure Functions. Within these Event handlers/Endpoints, you must define the actions that should be performed as part of pre and post events.
Azure Update Manager leverages Event grid to create and manage pre and post even
1. **Pre and post event** - You can follow the steps shared in the following section to create a pre and post event for schedule maintenance configuration. To learn more about the terms used in the Basics tab of Event Grid, see [Event Grid](../event-grid/concepts.md) terms.
-## Create a pre and post event (preview)
+## Create a pre and post event
::: zone pivot="new-mc"
PUT /subscriptions/<subscription Id>/resourceGroups/<resource group name>/provid
## Next steps-- For an overview of pre and post events (preview) in Azure Update Manager, refer [here](pre-post-scripts-overview.md).
+- For an overview of pre and post events in Azure Update Manager, see the [overview of pre and post events](pre-post-scripts-overview.md).
- To learn how to manage pre and post events or to cancel a schedule run, see [pre and post maintenance configuration events](manage-pre-post-events.md). - To learn how to use pre and post events to turn on and off your VMs using Webhooks, see [Create pre and post events using a webhook with Automation](tutorial-webhooks-using-runbooks.md). - To learn how to use pre and post events to turn on and off your VMs using Azure Functions, see [Create pre and post events using Azure Functions](tutorial-using-functions.md).
update-manager Pre Post Scripts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-scripts-overview.md
Title: An overview of pre and post events (preview) in your Azure Update Manager
-description: This article provides an overview on pre and post events (preview) and its requirements.
+description: This article provides an overview of pre and post events and their requirements.
Previously updated : 06/15/2024 Last updated : 07/24/2024
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The pre and post events (preview) in Azure Update Manager allow you to perform certain tasks automatically before and after a scheduled maintenance configuration. For more information on how to create schedule maintenance configurations, see [Schedule recurring updates for machines by using the Azure portal and Azure Policy](scheduled-patching.md). For example, using pre and post events, you can execute the following tasks on machines that are part of a schedule. The following list isn't exhaustive, and you can create pre and post events as per your need.
+The pre and post events in Azure Update Manager allow you to perform certain tasks automatically before and after a scheduled maintenance configuration. For more information on how to create schedule maintenance configurations, see [Schedule recurring updates for machines by using the Azure portal and Azure Policy](scheduled-patching.md). For example, using pre and post events, you can execute the following tasks on machines that are part of a schedule. The following list isn't exhaustive, and you can create pre and post events as per your need.
## Sample tasks
We recommend that you're watchful of the following:
- The status of the pre and post event run can be checked in the event handler you chose. ## Next steps-
+- To learn how to create pre and post events, see [pre and post maintenance configuration events](pre-post-events-schedule-maintenance-configuration.md).
- To learn how to configure pre and post events or to cancel a schedule run, see [pre and post maintenance configuration events](manage-pre-post-events.md).
+- To learn how to use pre and post events to turn on and off your VMs using Webhooks, see [Create pre and post events using a webhook with Automation](tutorial-webhooks-using-runbooks.md).
+- To learn how to use pre and post events to turn on and off your VMs using Azure Functions, see [Create pre and post events using Azure Functions](tutorial-using-functions.md).
update-manager Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/query-logs.md
Title: Query logs and results from Update Manager
-description: This article provides details on how you can review logs and search results from Azure Update Manager by using Azure Resource Graph.
+ Title: Query resources with Azure Resource Graph in Azure Update Manager
+description: This article provides details on how to access Azure Update Manager operations data by using Azure Resource Graph.
Previously updated : 11/21/2023 Last updated : 07/23/2024
-# Overview of query logs in Azure Update Manager
+# Access Azure Update Manager operations data using Azure Resource Graph
-Logs created from operations like update assessments and installations are stored by Azure Update Manager in [Azure Resource Graph](../governance/resource-graph/overview.md). Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update Manager uses Resource Graph to store its results. You can view the update history of the last 30 days from the resources.
+Logs created from operations like update assessments and installations are stored by Azure Update Manager in [Azure Resource Graph](../governance/resource-graph/overview.md). Resource Graph is an Azure service designed to be the store for Azure service details without any cost or deployment requirements. Update Manager uses Resource Graph to store its results. You can view update assessment history for the last 7 days and update installation history for the last 30 days from Resource Graph.
This article describes the structure of the logs from Update Manager and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs.
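For example, a Resource Graph query along these lines lists recent installation results (a sketch that assumes the `patchinstallationresources` table covered in the sample queries article; property names can vary by API version):

```kusto
// List update installation results from the last 30 days.
patchinstallationresources
| where type =~ "microsoft.compute/virtualmachines/patchinstallationresults"
| extend lastRun = todatetime(properties.lastModifiedDateTime)
| where lastRun > ago(30d)
| project name, status = properties.status, lastRun
```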
If the property for the resource type is `configurationassignments`, it includes
## Next steps -- For details of sample queries, see [Sample query logs](sample-query-logs.md).-- To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md).
+- For sample queries to access Azure Update Manager operations data, see [Sample Azure Resource Graph queries to access Azure Update Manager operations data](sample-query-logs.md).
+- To troubleshoot issues with Azure Update Manager, see [Troubleshoot issues with Azure Update Manager](troubleshoot.md).
update-manager Tutorial Using Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-using-functions.md
Title: Create pre and post events using Azure Functions.
+ Title: Create pre and post events (preview) using Azure Functions.
description: In this tutorial, you learn how to create the pre and post events using Azure Functions. Previously updated : 07/15/2024 Last updated : 07/24/2024 #Customer intent: As an IT admin, I want create pre and post events using Azure Functions.
-# Tutorial: Create pre and post events using Azure Functions
+# Tutorial: Create pre and post events (preview) using Azure Functions
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers.
In this tutorial, you learn how to:
You can also use Azure Storage accounts and Event Hubs to store, send, and receive events. Learn more about [how to create an event hub](../event-hubs/event-hubs-create.md) and [Storage queues](../event-hubs/event-hubs-create.md). ## Next steps
-Learn about [managing multiple machines](manage-multiple-machines.md).
+- Learn more about the [overview of pre and post events in Azure Update Manager](pre-post-scripts-overview.md).
+- Learn more about [how to create pre and post events](pre-post-events-schedule-maintenance-configuration.md).
+- To learn how to manage pre and post events or to cancel a schedule run, see [pre and post maintenance configuration events](manage-pre-post-events.md).
+- Learn more about [how to create pre and post events using a webhook with Automation](tutorial-webhooks-using-runbooks.md).
update-manager Tutorial Webhooks Using Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-webhooks-using-runbooks.md
Title: Create pre and post events using a webhook with Automation runbooks.
+ Title: Create pre and post events (preview) using a webhook with Automation runbooks.
description: In this tutorial, you learn how to create the pre and post events using webhook with Automation runbooks. Previously updated : 12/07/2023 Last updated : 07/24/2024 #Customer intent: As an IT admin, I want create pre and post events using a webhook with Automation runbooks.
-# Tutorial: Create pre and post events using a webhook with Automation
+# Tutorial: Create pre and post events (preview) using a webhook with Automation
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers.
-Pre and post events, also known as pre/post-scripts, allow you to execute user-defined actions before and after the schedule patch installation. One of the most common scenarios is to start and stop a VM. With pre-events, you can run a prepatching script to start the VM before initiating the schedule patching process. Once the schedule patching is complete, and the server is rebooted, a post-patching script can be executed to safely shutdown the VM
+Pre and post events, also known as pre/post-scripts, allow you to execute user-defined actions before and after the schedule patch installation. One of the most common scenarios is to start and stop a VM. With pre-events, you can run a prepatching script to start the VM before initiating the schedule patching process. Once the schedule patching is complete, and the server is rebooted, a post-patching script can be executed to safely shut down the VM.
This tutorial explains how to create pre and post events to start and stop a VM in a schedule patch workflow using a webhook.
Invoke-AzRestMethod `
## Next steps
-Learn about [managing multiple machines](manage-multiple-machines.md).
+- Learn more about the [overview of pre and post events in Azure Update Manager](pre-post-scripts-overview.md).
+- Learn more about [how to create pre and post events](pre-post-events-schedule-maintenance-configuration.md).
+- To learn how to manage pre and post events or to cancel a schedule run, see [pre and post maintenance configuration events](manage-pre-post-events.md).
+- Learn more about [how to create pre and post events using Azure Functions](tutorial-using-functions.md).
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Previously updated : 07/05/2024 Last updated : 07/24/2024 # What's new in Azure Update Manager [Azure Update Manager](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Azure Update Manager. + ## June 2024 ### New region support
virtual-desktop Client Device Redirection Intune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/client-device-redirection-intune.md
Now that you configure Intune to manage device redirection on personal devices,
Configuring redirection settings for Windows App and the Remote Desktop app on a client device using Microsoft Intune has the following limitation: -- When you configure client device redirection for the Remote Desktop app on iOS and iPadOS, multifactor authentication (MFA) requests might get stuck in a loop. A common scenario of this issue happens when the Remote Desktop app is being run on an Intune enrolled iPhone and the same iPhone is being used to receive MFA requests from the Microsoft Authenticator app when signing into the Remote Desktop app. To work around this issue, use the Remote Desktop app on a different device from the device being used to receive MFA requests, such as an iPad. This issue doesn't occur with Windows App.
+- When you configure client device redirection for the Remote Desktop app or Windows App on iOS and iPadOS, multifactor authentication (MFA) requests might get stuck in a loop. A common scenario for this issue is when the Remote Desktop app or Windows App runs on an Intune-enrolled iPhone and the same iPhone is used to receive MFA requests from the Microsoft Authenticator app when signing in. To work around this issue, use the Remote Desktop app or Windows App on a different device (such as an iPad) from the device that receives the MFA requests (the iPhone).
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients Previously updated : 07/17/2024 Last updated : 07/23/2024 # What's new in the Remote Desktop client for Windows
virtual-machines Disks High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-high-availability.md
Single VMs using only [Premium SSD disks](disks-types.md#premium-ssds) as the OS
### Use zone-redundant storage disks
-Zone-redundant storage (ZRS) disks synchronously replicate data across three availability zones, which are separated groups of data centers in a region that have independent power, cooling, and networking infrastructure. With ZRS disks, your data is accessible even in the event of a zonal outage. ZRS disks have limitations, see [Zone-redundant storage for managed disks](disks-redundancy.md#zone-redundant-storage-for-managed-disks) for details.
+Zone-redundant storage (ZRS) disks synchronously replicate data across three availability zones, which are separated groups of data centers in a region that have independent power, cooling, and networking infrastructure. With ZRS disks, you can [force detach](https://learn.microsoft.com/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes&preserve-view=true) (in preview) your ZRS data disks even in the event of a zonal outage. ZRS disks have limitations; see [Zone-redundant storage for managed disks](disks-redundancy.md#zone-redundant-storage-for-managed-disks) for details.
## Recommendations for applications running on multiple VMs
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-redundancy.md
If your workflow doesn't support application-level synchronous writes across zon
Zone-redundant storage (ZRS) synchronously replicates your Azure managed disk across three Azure availability zones in the region you select. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS disks provide at least 99.9999999999% (12 9's) of durability over a given year.
-A ZRS disk lets you recover from failures in availability zones. If a zone went down and your virtual machine (VM) wasn't affected, then your workloads continue running. But if your VM was affected by an outage and you want to recover before it's resolved, you can either take a snapshot or make a copy of your ZRS disks. Once you've created new disks, attach them to a VM.
+A ZRS disk lets you recover from failures in availability zones. If a zone goes down and your virtual machine (VM) isn't affected, your workloads continue running. But if your VM is affected by an outage and you want to recover before it's resolved, you can [force detach](https://learn.microsoft.com/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes&preserve-view=true) (in preview) the ZRS data disks from the impacted VM and attach them to another VM.
ZRS disks can also be shared between VMs for improved availability with clustered or distributed applications like SQL FCI, SAP ASCS/SCS, or GFS2. A shared ZRS disk can be attached to primary and secondary VMs in different zones to take advantage of both ZRS and [availability zones](../availability-zones/az-overview.md). If your primary zone fails, you can quickly fail over to the secondary VM using [SCSI persistent reservation](disks-shared-enable.md#supported-scsi-pr-commands). For more information on ZRS disks, see [Zone Redundant Storage (ZRS) option for Azure Disks for high availability](https://youtu.be/RSHmhmdHXcY).
For more information on ZRS disks, see [Zone Redundant Storage (ZRS) option for
[!INCLUDE [disk-storage-zrs-limitations](../../includes/disk-storage-zrs-limitations.md)]
+Force detach (in preview) is supported for ZRS data disks, but not for ZRS OS disks.
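For reference, here's a hedged sketch of the request body shape for the linked `attachDetachDataDisks` operation (field names are based on that REST reference; verify them against the current API version before use):

```json
{
  "dataDisksToDetach": [
    {
      "diskId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/disks/<zrs-data-disk>",
      "detachOption": "ForceDetach"
    }
  ]
}
```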
+ ### Regional availability [!INCLUDE [disk-storage-zrs-regions](../../includes/disk-storage-zrs-regions.md)]
Except for more write latency, disks using ZRS are identical to disks using LRS,
## Next steps - To learn how to create a ZRS disk, see [Deploy a ZRS managed disk](disks-deploy-zrs.md).-- To convert an LRS disk to ZRS, see [Convert a disk from LRS to ZRS](disks-migrate-lrs-zrs.md).
+- To convert an LRS disk to ZRS, see [Convert a disk from LRS to ZRS](disks-migrate-lrs-zrs.md).
+- Learn more about [force detach](https://learn.microsoft.com/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes&preserve-view=true).
virtual-machines Dlsv5 Dldsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dlsv5-dldsv5-series.md
- Title: Dlsv5 and Dldsv5
-description: Specifications for the Dlsv5 and Dldsv5-series VMs.
----- Previously updated : 02/16/2023---
-# Dlsv5 and Dldsv5-series
-
-The Dlsv5 and Dldsv5-series Virtual Machines runs on Intel® Xeon® Platinum 8473C (Sapphire Rapids), or Intel® Xeon® Platinum 8370C (Ice Lake) processor in a [hyper threaded](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) configuration. This new processor features an all core turbo clock speed of 3.5 GHz with [Intel&reg; Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Advanced-Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). The Dlsv5 and Dldsv5 VM series provides 2GiBs of RAM per vCPU and optimized for workloads that require less RAM per vCPU than standard VM sizes. Target workloads include web servers, gaming, video encoding, AI/ML, and batch processing.
-
-## Dlsv5-series
-Dlsv5-series virtual machines run on Intel® Xeon® Platinum 8473C (Sapphire Rapids), or Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 96 vCPU and 192 GiB of RAM. These VM sizes can reduce cost when running non-memory intensive applications.
-
-Dlsv5-series virtual machines do not have any temporary storage thus lowering the price of entry. You can attach Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
--
-[Premium Storage](premium-storage-performance.md): Supported<br>
-[Premium Storage caching](premium-storage-performance.md): Supported<br>
-[Live Migration](maintenance-and-updates.md): Supported<br>
-[Memory Preserving Updates](maintenance-and-updates.md): Supported<br>
-[VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
-<br>
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps<sup>*</sup> | Max burst uncached disk throughput: IOPS/MBps3 | Max NICs |Max network bandwidth (Mbps) |
-||||||||| |
-| Standard_D2ls_v5 | 2 | 4 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
-| Standard_D4ls_v5 | 4 | 8 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 |
-| Standard_D8ls_v5 | 8 | 16 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |
-| Standard_D16ls_v5 | 16 | 32 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 8 | 12500 |
-| Standard_D32ls_v5 | 32 | 64 | Remote Storage Only | 32 | 51200/865 | 80000/2000 | 8 | 16000 |
-| Standard_D48ls_v5 | 48 | 96 | Remote Storage Only | 32 | 76800/1315 | 80000/3000 | 8 | 24000 |
-| Standard_D64ls_v5 | 64 | 128 | Remote Storage Only | 32 | 80000/1735 | 80000/3000 | 8 | 30000 |
-| Standard_D96ls_v5 | 96 | 192 | Remote Storage Only | 32 | 80000/2600 | 80000/4000 |8 | 35000 |
-
-<sup>*</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)<br>
-<sup>1</sup> Accelerated networking is required and turned on by default on all Dlsv5 virtual machines.<br>
-
-## Dldsv5-series
-
-Dldsv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 96 vCPU and 192 GiB of RAM as well as fast, local SSD storage up to 3,600 GiB. These VM sizes can reduce cost when running non-memory intensive applications.
-
-Dldsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
--
-[Premium Storage](premium-storage-performance.md): Supported<br>
-[Premium Storage caching](premium-storage-performance.md): Supported<br>
-[Live Migration](maintenance-and-updates.md): Supported<br>
-[Memory Preserving Updates](maintenance-and-updates.md): Supported<br>
-[VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
-<br>
--
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>3</sup> | Max NICs | Max network bandwidth (Mbps) |
-|||||||||||
-| Standard_D2lds_v5 | 2 | 4 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 |
-| Standard_D4lds_v5 | 4 | 8 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
-| Standard_D8lds_v5 | 8 | 16 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 |
-| Standard_D16lds_v5 | 16 | 32 | 600 | 32 | 75000/1000 | 25600/600 | 40000/1200 | 8 | 12500 |
-| Standard_D32lds_v5 | 32 | 64 | 1200 | 32 | 150000/2000 | 51200/865 | 80000/2000 | 8 | 16000 |
-| Standard_D48lds_v5 | 48 | 96 | 1800 | 32 | 225000/3000 | 76800/1315 | 80000/3000 | 8 | 24000 |
-| Standard_D64lds_v5 | 64 | 128 | 2400 | 32 | 300000/4000 | 80000/1735 | 80000/3000 | 8 | 30000 |
-| Standard_D96lds_v5 | 96 | 192 | 3600 | 32 | 450000/4000 | 80000/2600 | 80000/4000 | 8 | 35000 |
-
-<sup>*</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)<br>
-<sup>1</sup> Accelerated networking is required and turned on by default on all Dldsv5 virtual machines.<br>
-<sup>2</sup> Dldsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
--
-## Other sizes and information
--- [General purpose](sizes-general.md)-- [Memory optimized](sizes-memory.md)-- [Storage optimized](sizes-storage.md)-- [GPU optimized](sizes-gpu.md)-- [High performance compute](sizes-hpc.md)-- [Previous generations](sizes-previous-gen.md)-
-Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
-
-For more information on Disks Types: [Disk Types](./disks-types.md#ultra-disks)
-
-## Next steps
-
-Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
For more information about enabling the NVMe interface on virtual machines creat
- [Azure portal - Plan ID: 2022-datacenter-azure-edition](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition) - [Azure portal - Plan ID: 2022-datacenter-azure-edition-core](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core) - [Azure portal - Plan ID: 2022-datacenter-azure-edition-core-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core-smalldisk)
+- [Azure portal - Plan ID: 2022-datacenter-azure-edition-hotpatch](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-hotpatch)
+- [Azure portal - Plan ID: 2022-datacenter-azure-edition-hotpatch-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-hotpatch-smalldisk)
+
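If you create the VM from the CLI instead of the portal, recent Azure CLI versions expose a `--disk-controller-type` parameter on `az vm create`. A hedged sketch, assuming an NVMe-capable size and image (all names are examples; supply an admin password or respond to the prompt):

```azurecli
# Create a Windows Server 2022 Azure Edition VM with the NVMe disk controller.
# Requires a VM size and image that both support NVMe.
az vm create \
  --resource-group myResourceGroup \
  --name myNvmeVM \
  --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition:latest \
  --size Standard_E2bds_v5 \
  --disk-controller-type NVMe \
  --admin-username azureuser
```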
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
There are some limitations for sharing your gallery to the community:
- You can't convert an existing private gallery (RBAC-enabled gallery) to a community gallery.
- You can't use a third-party image from Marketplace and publish it to the community. For a list of approved operating system base images, see [approved base images](https://go.microsoft.com/fwlink/?linkid=2245050).
- Encrypted images aren't supported.
+- Not available in Government clouds
- Image resources need to be created in the same region as the gallery. For example, if you create a gallery in West US, the image definitions and image versions should be created in West US if you want to make them available.
- You can't share [VM Applications](vm-applications.md) to the community yet.
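For context, community sharing is enabled per gallery. A hedged CLI sketch, assuming the gallery was already created with community permissions and publisher information (names are placeholders):

```azurecli
# Enable community sharing on a gallery created with --permissions Community.
az sig share enable-community \
  --resource-group myResourceGroup \
  --gallery-name myGallery
```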
virtual-machines D Family https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/d-family.md
## Series in family
-### Dpsv6-series
+### Dpsv6 and Dplsv6-series
+#### [Dpsv6-series](#tab/dpsv6)
[!INCLUDE [dpsv6-series-summary](./includes/dpsv6-series-summary.md)] [View the full Dpsv6-series page](./dpsv6-series.md). [!INCLUDE [dpsv6-series-specs](./includes/dpsv6-series-specs.md)] -
-### Dplsv6-series
+#### [Dplsv6-series](#tab/dplsv6)
[!INCLUDE [dplsv6-series-summary](./includes/dplsv6-series-summary.md)] [View the full Dplsv6-series page](./dplsv6-series.md). [!INCLUDE [dplsv6-series-specs](./includes/dplsv6-series-specs.md)] -
-### Dpdsv6-series
+
+### Dpdsv6 and Dpldsv6-series
+#### [Dpdsv6-series](#tab/dpdsv6)
[!INCLUDE [dpdsv6-series-summary](./includes/dpdsv6-series-summary.md)] [View the full Dpdsv6-series page](./dpdsv6-series.md). [!INCLUDE [dpdsv6-series-specs](./includes/dpdsv6-series-specs.md)] -
-### Dpldsv6-series
+#### [Dpldsv6-series](#tab/dpldsv6)
[!INCLUDE [dpldsv6-series-summary](./includes/dpldsv6-series-summary.md)] [View the full Dpldsv6-series page](./dpldsv6-series.md). [!INCLUDE [dpldsv6-series-specs](./includes/dpldsv6-series-specs.md)] -+ ### Dasv6 and Dadsv6-series [!INCLUDE [dasv6-dadsv6-series-summary](./includes/dasv6-dadsv6-series-summary.md)]
### Dlsv5 and Dldsv5-series
+#### [Dlsv5-series](#tab/dlsv5)
+
+[View the full Dlsv5-series page](./dlsv5-series.md).
+
-[View the full Dlsv5 and Dldsv5-series page](../../dlsv5-dldsv5-series.md).
+#### [Dldsv5-series](#tab/dldsv5)
+[View the full Dldsv5-series page](./dldsv5-series.md).
++ ### Dv4 and Dsv4-series [!INCLUDE [dv4-dsv4-series-summary](./includes/dv4-dsv4-series-summary.md)]
virtual-machines Dldsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dldsv5-series.md
+
+ Title: Dldsv5 size series
+description: Information on and specifications of the Dldsv5-series sizes
+ Last updated : 07/18/2024
+# Dldsv5 size series
++
+## Host specifications
+
+## Feature support
+Premium Storage: Supported<br>
+Premium Storage caching: Supported<br>
+Live Migration: Supported<br>
+Memory Preserving Updates: Supported<br>
+VM Generation Support: Generation 1 and 2<br>
+Accelerated Networking: Required <br>
+Ephemeral OS Disks: Supported <br>
+Nested Virtualization: Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
+| | | |
+| Standard_D2lds_v5 | 2 | 4 |
+| Standard_D4lds_v5 | 4 | 8 |
+| Standard_D8lds_v5 | 8 | 16 |
+| Standard_D16lds_v5 | 16 | 32 |
+| Standard_D32lds_v5 | 32 | 64 |
+| Standard_D48lds_v5 | 48 | 96 |
+| Standard_D64lds_v5 | 64 | 128 |
+| Standard_D96lds_v5 | 96 | 192 |
+
+#### VM Basics resources
+- [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md)
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
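Before deploying, you can confirm there's enough regional vCPU quota for this family. A sketch using `az vm list-usage`; the region and filter string are examples, and the family's localized name can vary:

```azurecli
# Show current vCPU usage and limits for the DLDSv5 family in East US.
az vm list-usage \
  --location eastus \
  --query "[?contains(name.localizedValue, 'DLDSv5')]" \
  --output table
```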
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage (Qty.) | Temp Storage Size (GiB) | Temp ReadWrite Storage IOPS | Temp ReadWrite Storage Speed (MBps) | Temp ReadOnly Storage IOPS | Temp ReadOnly Storage Speed (MBps) |
+| | | | | | | |
+| Standard_D2lds_v5 | 1 | 75 | 9000 | 125 | | |
+| Standard_D4lds_v5 | 1 | 150 | 19000 | 250 | | |
+| Standard_D8lds_v5 | 1 | 300 | 38000 | 500 | | |
+| Standard_D16lds_v5 | 1 | 600 | 75000 | 1000 | | |
+| Standard_D32lds_v5 | 1 | 1200 | 150000 | 2000 | | |
+| Standard_D48lds_v5 | 1 | 1800 | 225000 | 3000 | | |
+| Standard_D64lds_v5 | 1 | 2400 | 300000 | 4000 | | |
+| Standard_D96lds_v5 | 1 | 3600 | 450000 | 4000 | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly (R-O) or ReadWrite (R-W). For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
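To make the cached/uncached distinction concrete, here's a hedged sketch that attaches a new data disk with the host cache set to ReadOnly (names and size are placeholders):

```azurecli
# Attach a new 128-GiB data disk with ReadOnly host caching.
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myDataDisk \
  --new \
  --size-gb 128 \
  --caching ReadOnly
```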
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage (Qty.) | Uncached Storage IOPS | Uncached Storage Speed (MBps) | Uncached Storage Burst<sup>1</sup> IOPS | Uncached Storage Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Storage IOPS | Uncached Special<sup>2</sup> Storage Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Storage IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Storage Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2lds_v5 | 4 | 3750 | 85 | 10000 | 1200 | | | | |
+| Standard_D4lds_v5 | 8 | 6400 | 145 | 20000 | 1200 | | | | |
+| Standard_D8lds_v5 | 16 | 12800 | 290 | 20000 | 1200 | | | | |
+| Standard_D16lds_v5 | 32 | 25600 | 600 | 40000 | 1200 | | | | |
+| Standard_D32lds_v5 | 32 | 51200 | 865 | 80000 | 2000 | | | | |
+| Standard_D48lds_v5 | 32 | 76800 | 1315 | 80000 | 3000 | | | | |
+| Standard_D64lds_v5 | 32 | 80000 | 1735 | 80000 | 3000 | | | | |
+| Standard_D96lds_v5 | 32 | 80000 | 2600 | 80000 | 4000 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>These sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2lds_v5 | 2 | 12500 |
+| Standard_D4lds_v5 | 2 | 12500 |
+| Standard_D8lds_v5 | 4 | 12500 |
+| Standard_D16lds_v5 | 8 | 12500 |
+| Standard_D32lds_v5 | 8 | 16000 |
+| Standard_D48lds_v5 | 8 | 24000 |
+| Standard_D64lds_v5 | 8 | 30000 |
+| Standard_D96lds_v5 | 8 | 35000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
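Because accelerated networking is required on this series, one quick check is whether it's enabled on the VM's NIC. A sketch with placeholder names:

```azurecli
# Returns true when accelerated networking is enabled on the NIC.
az network nic show \
  --resource-group myResourceGroup \
  --name myVMNic \
  --query enableAcceleratedNetworking \
  --output tsv
```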
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dlsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dlsv5-series.md
+
+ Title: Dlsv5 size series
+description: Information on and specifications of the Dlsv5-series sizes
+ Last updated : 07/18/2024
+# Dlsv5 size series
++
+## Host specifications
+
+## Feature support
+Premium Storage: Supported<br>
+Premium Storage caching: Supported<br>
+Live Migration: Supported<br>
+Memory Preserving Updates: Supported<br>
+VM Generation Support: Generation 1 and 2<br>
+Accelerated Networking: Required <br>
+Ephemeral OS Disks: Not Supported <br>
+Nested Virtualization: Supported <br>
+<br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
+| | | |
+| Standard_D2ls_v5 | 2 | 4 |
+| Standard_D4ls_v5 | 4 | 8 |
+| Standard_D8ls_v5 | 8 | 16 |
+| Standard_D16ls_v5 | 16 | 32 |
+| Standard_D32ls_v5 | 32 | 64 |
+| Standard_D48ls_v5 | 48 | 96 |
+| Standard_D64ls_v5 | 64 | 128 |
+| Standard_D96ls_v5 | 96 | 192 |
+
+#### VM Basics resources
+- [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md)
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage is present in this series. For similar sizes with local storage, see the [Dldsv5-series](./dldsv5-series.md).
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
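Because there's no local temp disk, the OS disk is always a managed disk. A minimal deployment sketch for this series (image alias and names are examples):

```azurecli
# Create a D2ls_v5 VM with a Premium SSD OS disk; this series has no local temp disk.
az vm create \
  --resource-group myResourceGroup \
  --name myD2lsVM \
  --image Ubuntu2204 \
  --size Standard_D2ls_v5 \
  --storage-sku Premium_LRS \
  --admin-username azureuser \
  --generate-ssh-keys
```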
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage (Qty.) | Uncached Storage IOPS | Uncached Storage Speed (MBps) | Uncached Storage Burst<sup>1</sup> IOPS | Uncached Storage Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Storage IOPS | Uncached Special<sup>2</sup> Storage Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Storage IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Storage Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2ls_v5 | 4 | 3750 | 85 | 10000 | 1200 | | | | |
+| Standard_D4ls_v5 | 8 | 6400 | 145 | 20000 | 1200 | | | | |
+| Standard_D8ls_v5 | 16 | 12800 | 290 | 20000 | 1200 | | | | |
+| Standard_D16ls_v5 | 32 | 25600 | 600 | 40000 | 1200 | | | | |
+| Standard_D32ls_v5 | 32 | 51200 | 865 | 80000 | 2000 | | | | |
+| Standard_D48ls_v5 | 32 | 76800 | 1315 | 80000 | 3000 | | | | |
+| Standard_D64ls_v5 | 32 | 80000 | 1735 | 80000 | 3000 | | | | |
+| Standard_D96ls_v5 | 32 | 80000 | 2600 | 80000 | 4000 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>These sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2ls_v5 | 2 | 12500 |
+| Standard_D4ls_v5 | 2 | 12500 |
+| Standard_D8ls_v5 | 4 | 12500 |
+| Standard_D16ls_v5 | 8 | 12500 |
+| Standard_D32ls_v5 | 8 | 16000 |
+| Standard_D48ls_v5 | 8 | 24000 |
+| Standard_D64ls_v5 | 8 | 30000 |
+| Standard_D96ls_v5 | 8 | 35000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dpdsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpdsv6-series.md
- build-2024 Previously updated : 05/09/2024 Last updated : 07/22/2024
vCPUs (Qty.) and Memory for each size
-| Size Name | vCPUs (Qty.) | Memory (GB) |
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
| | | | | Standard_D2pds_v6 | 2 | 8 | | Standard_D4pds_v6 | 4 | 16 |
vCPUs (Qty.) and Memory for each size
| Standard_D96pds_v6 | 96 | 384 | > [!NOTE]
-> The Dpdsv6 VM series will only work on OS images that are tagged with NVMe support. If your current OS image is not supported for NVMe, you’ll see an error message. NVMe support is available on the most popular OS images, and we continuously improve the OS image coverage.
+> The Dpdsv6 VM series only works on OS images that support NVMe (that is, the image includes the NVMe drivers required for the local storage). If your current OS image doesn't have NVMe support, you'll see an error message. NVMe support is available on the most popular OS images, and we're continuously improving OS image compatibility.
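One hedged way to check whether a platform image is tagged for NVMe is to inspect its declared features; the URN below is only an example, and the `DiskControllerTypes` feature appears only on images that declare it:

```azurecli
# NVMe-capable images report a DiskControllerTypes feature that includes NVMe.
az vm image show \
  --urn Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest \
  --query features
```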
#### VM Basics resources - [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md) - [Check vCPU quotas](../../../virtual-machines/quotas.md)-- [Introduction to Azure compute units (ACUs)](../../../virtual-machines/acu.md) ### [Local Storage](#tab/sizestoragelocal)
virtual-machines Dpldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpldsv6-series.md
- build-2024 Previously updated : 05/09/2024 Last updated : 07/22/2024
vCPUs (Qty.) and Memory for each size
-| Size Name | vCPUs (Qty.) | Memory (GB) |
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
| | | | | Standard_D2plds_v6 | 2 | 4 | | Standard_D4plds_v6 | 4 | 8 |
vCPUs (Qty.) and Memory for each size
| Standard_D96plds_v6 | 96 | 192 | > [!NOTE]
-> The Dpldsv6 VM series will only work on OS images that are tagged with NVMe support. If your current OS image is not supported for NVMe, youΓÇÖll see an error message. NVMe support is available on the most popular OS images, and we continuously improve the OS image coverage.
+> The Dpldsv6 VM series only works on OS images that support NVMe (that is, the image includes the NVMe drivers required for the local storage). If your current OS image doesn't have NVMe support, you'll see an error message. NVMe support is available on the most popular OS images, and we're continuously improving OS image compatibility.
#### VM Basics resources - [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md) - [Check vCPU quotas](../../../virtual-machines/quotas.md)-- [Introduction to Azure compute units (ACUs)](../../../virtual-machines/acu.md) ### [Local Storage](#tab/sizestoragelocal)
virtual-machines Dplsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dplsv6-series.md
- build-2024 Previously updated : 05/09/2024 Last updated : 07/22/2024
vCPUs (Qty.) and Memory for each size
-| Size Name | vCPUs (Qty.) | Memory (GB) |
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
| | | | | Standard_D2pls_v6 | 2 | 4 | | Standard_D4pls_v6 | 4 | 8 |
vCPUs (Qty.) and Memory for each size
#### VM Basics resources - [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md) - [Check vCPU quotas](../../../virtual-machines/quotas.md)-- [Introduction to Azure compute units (ACUs)](../../../virtual-machines/acu.md) ### [Local Storage](#tab/sizestoragelocal)
virtual-machines Dpsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpsv6-series.md
- build-2024 Previously updated : 05/09/2024 Last updated : 07/22/2024
vCPUs (Qty.) and Memory for each size
-| Size Name | vCPUs (Qty.) | Memory (GB) |
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
| | | | | Standard_D2ps_v6 | 2 | 8 | | Standard_D4ps_v6 | 4 | 16 |
vCPUs (Qty.) and Memory for each size
#### VM Basics resources - [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md) - [Check vCPU quotas](../../../virtual-machines/quotas.md)-- [Introduction to Azure compute units (ACUs)](../../../virtual-machines/acu.md) ### [Local Storage](#tab/sizestoragelocal)
virtual-machines E Family https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/e-family.md
Previously updated : 04/18/2024 Last updated : 07/22/2024
## Series in family
+### Epsv6 and Epdsv6-series
+#### [Epsv6-series](#tab/epsv6)
+
+[View the full Epsv6-series page](./epsv6-series.md).
++
+#### [Epdsv6-series](#tab/epdsv6)
+
+[View the full Epdsv6-series page](./epdsv6-series.md).
+++
+### Easv6 and Eadsv6-series
+
+[View the full Easv6 and Eadsv6-series page](../../easv6-eadsv6-series.md).
++ ### Ev5 and Esv5-series [!INCLUDE [ev5-esv5-series-summary](./includes/ev5-esv5-series-summary.md)]
[!INCLUDE [easv5-eadsv5-series-specs](./includes/easv5-eadsv5-series-specs.md)]
-### Easv6 and Eadsv6-series
-
-[View the full Easv6 and Eadsv6-series page](../../easv6-eadsv6-series.md).
--- ### Epsv5 and Epdsv5-series [!INCLUDE [epsv5-epdsv5-series-summary](./includes/epsv5-epdsv5-series-summary.md)]
virtual-machines Epdsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/epdsv6-series.md
- build-2024 Previously updated : 05/09/2024 Last updated : 07/22/2024
vCPUs (Qty.) and Memory for each size
-| Size Name | vCPUs (Qty.) | Memory (GB) |
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
| | | | | Standard_E2pds_v6 | 2 | 16 | | Standard_E4pds_v6 | 4 | 32 |
vCPUs (Qty.) and Memory for each size
| Standard_E96pds_v6 | 96 | 672 | > [!NOTE]
-> The Epdsv6 VM series will only work on OS images that are tagged with NVMe support. If your current OS image is not supported for NVMe, youΓÇÖll see an error message. NVMe support is available on the most popular OS images, and we continuously improve the OS image coverage.
+> The Epdsv6 VM series only works on OS images that support NVMe (that is, the image includes the NVMe drivers required for the local storage). If your current OS image doesn't have NVMe support, you'll see an error message. NVMe support is available on the most popular OS images, and we're continuously improving OS image compatibility.
#### VM Basics resources - [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md) - [Check vCPU quotas](../../../virtual-machines/quotas.md)-- [Introduction to Azure compute units (ACUs)](../../../virtual-machines/acu.md) ### [Local Storage](#tab/sizestoragelocal)
virtual-machines Epsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/epsv6-series.md
- build-2024 Previously updated : 05/09/2024 Last updated : 07/22/2024
vCPUs (Qty.) and Memory for each size:
-| Size Name | vCPUs (Qty.) | Memory (GB) |
+| Size Name | vCPUs (Qty.) | Memory (GiB) |
| | | | | Standard_E2ps_v6 | 2 | 16 | | Standard_E4ps_v6 | 4 | 32 |
vCPUs (Qty.) and Memory for each size:
#### VM Basics resources - [What are vCPUs (Qty.)](../../../virtual-machines/managed-disks-overview.md) - [Check vCPU quotas](../../../virtual-machines/quotas.md)-- [Introduction to Azure compute units (ACUs)](../../../virtual-machines/acu.md) ### [Local Storage](#tab/sizestoragelocal)
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/overview.md
General purpose VM sizes provide balanced CPU-to-memory ratio. Ideal for testing
|-||| | [A-family](./general-purpose/a-family.md) | Entry-level economical | [Av2-series](./general-purpose/a-family.md#av2-series) <br> [Previous-gen A-family series](./previous-gen-sizes-list.md#general-purpose-previous-gen-sizes) | | [B-family](./general-purpose/b-family.md) | Burstable | [Bsv2-series](./general-purpose/b-family.md#bsv2-series) <br> [Basv2-series](./general-purpose/b-family.md#basv2-series) <br> [Bpsv2-series](./general-purpose/b-family.md#bpsv2-series) |
-| [D-family](./general-purpose/d-family.md) | Enterprise-grade applications <br> Relational databases <br> In-memory caching <br> Data analytics | [Dpsv6-series](./general-purpose/d-family.md#dpsv6-series) and [Dplsv6-series](./general-purpose/d-family.md#dplsv6-series ) <br> [Dpdsv6-series](./general-purpose/d-family.md#dpdsv6-series) and [Dpldsv6-series](./general-purpose/d-family.md#dpldsv6-series) <br> [Dalsv6 and Daldsv6-series](./general-purpose/d-family.md#dalsv6-and-daldsv6-series) <br> [Dpsv5 and Dpdsv5-series](./general-purpose/d-family.md#dpsv5-and-dpdsv5-series) <br> [Dpldsv5 and Dpldsv5-series](./general-purpose/d-family.md#dplsv5-and-dpldsv5-series) <br> [Dlsv5 and Dldsv5-series](./general-purpose/d-family.md#dlsv5-and-dldsv5-series) <br> [Dv5 and Dsv5-series](./general-purpose/d-family.md#dv5-and-dsv5-series) <br> [Ddv5 and Ddsv5-series](./general-purpose/d-family.md#ddv5-and-ddsv5-series) <br> [Dasv5 and Dadsv5-series](./general-purpose/d-family.md#dasv5-and-dadsv5-series) <br> [Previous-gen D-family series](./previous-gen-sizes-list.md#general-purpose-previous-gen-sizes) |
+| [D-family](./general-purpose/d-family.md) | Enterprise-grade applications <br> Relational databases <br> In-memory caching <br> Data analytics | [Dpsv6-series and Dplsv6-series](./general-purpose/d-family.md#dpsv6-and-dplsv6-series) <br> [Dpdsv6-series and Dpldsv6-series](./general-purpose/d-family.md#dpdsv6-and-dpldsv6-series) <br> [Dalsv6 and Daldsv6-series](./general-purpose/d-family.md#dalsv6-and-daldsv6-series) <br> [Dpsv5 and Dpdsv5-series](./general-purpose/d-family.md#dpsv5-and-dpdsv5-series) <br> [Dplsv5 and Dpldsv5-series](./general-purpose/d-family.md#dplsv5-and-dpldsv5-series) <br> [Dlsv5 and Dldsv5-series](./general-purpose/d-family.md#dlsv5-and-dldsv5-series) <br> [Dv5 and Dsv5-series](./general-purpose/d-family.md#dv5-and-dsv5-series) <br> [Ddv5 and Ddsv5-series](./general-purpose/d-family.md#ddv5-and-ddsv5-series) <br> [Dasv5 and Dadsv5-series](./general-purpose/d-family.md#dasv5-and-dadsv5-series) <br> [Previous-gen D-family series](./previous-gen-sizes-list.md#general-purpose-previous-gen-sizes) |
| [DC-family](./general-purpose/dc-family.md) | D-family with confidential computing | [DCasv5 and DCadsv5-series](./general-purpose/dc-family.md#dcasv5-and-dcadsv5-series) <br> [DCas_cc_v5 and DCads_cc_v5-series](./general-purpose/dc-family.md#dcas_cc_v5-and-dcads_cc_v5-series) <br> [DCesv5 and DCedsv5-series](./general-purpose/dc-family.md#dcesv5-and-dcedsv5-series) <br> [DCsv3 and DCdsv3-series](./general-purpose/dc-family.md#dcsv3-and-dcdsv3-series) <br> [Previous-gen DC-family](./previous-gen-sizes-list.md#general-purpose-previous-gen-sizes)|
List of memory optimized VM sizes with links to each series' family page section
| Family | Workloads | Series List | |-|||
-| [E-family](./memory-optimized/e-family.md) | Relational databases <br> Medium to large caches <br> In-memory analytics |[Easv6 and Eadsv6-series](./memory-optimized/e-family.md#easv6-and-eadsv6-series)<br> [Ev5 and Esv5-series](./memory-optimized/e-family.md#ev5-and-esv5-series)<br> [Edv5 and Edsv5-series](./memory-optimized/e-family.md#edv5-and-edsv5-series)<br> [Easv5 and Eadsv5-series](./memory-optimized/e-family.md#easv5-and-eadsv5-series)<br> [Epsv5 and Epdsv5-series](./memory-optimized/e-family.md#epsv5-and-epdsv5-series)<br> [Previous-gen families](./previous-gen-sizes-list.md#memory-optimized-previous-gen-sizes) |
+| [E-family](./memory-optimized/e-family.md) | Relational databases <br> Medium to large caches <br> In-memory analytics |[Epsv6 and Epdsv6-series](./memory-optimized/e-family.md#epsv6-and-epdsv6-series)<br> [Easv6 and Eadsv6-series](./memory-optimized/e-family.md#easv6-and-eadsv6-series)<br> [Ev5 and Esv5-series](./memory-optimized/e-family.md#ev5-and-esv5-series)<br> [Edv5 and Edsv5-series](./memory-optimized/e-family.md#edv5-and-edsv5-series)<br> [Easv5 and Eadsv5-series](./memory-optimized/e-family.md#easv5-and-eadsv5-series)<br> [Epsv5 and Epdsv5-series](./memory-optimized/e-family.md#epsv5-and-epdsv5-series)<br> [Previous-gen families](./previous-gen-sizes-list.md#memory-optimized-previous-gen-sizes) |
| [Eb-family](./memory-optimized/e-family.md) | E-family with High remote storage performance | [Ebdsv5 and Ebsv5-series](./memory-optimized/eb-family.md#ebdsv5-and-ebsv5-series) | | [EC-family](./memory-optimized/ec-family.md) | E-family with confidential computing | [ECasv5 and ECadsv5-series](./memory-optimized/ec-family.md#ecasv5-and-ecadsv5-series)<br> [ECas_cc_v5 and ECads_cc_v5-series](./memory-optimized/ec-family.md#ecasccv5-and-ecadsccv5-series)<br> [ECesv5 and ECedsv5-series](./memory-optimized/ec-family.md#ecesv5-and-ecedsv5-series) | | [M-family](./memory-optimized/m-family.md) | Extremely large databases <br> Large amounts of memory | [Msv3 and Mdsv3-series](./memory-optimized/m-family.md#msv3-and-mdsv3-series)<br> [Mv2-series](./memory-optimized/m-family.md#mv2-series)<br> [Msv2 and Mdsv2-series](./memory-optimized/m-family.md#msv2-and-mdsv2-series) |
virtual-network Monitor Public Ip Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip-reference.md
Title: Monitoring Public IP addresses data reference-
-description: Important reference material needed when you monitor Public IP addresses
+ Title: Monitoring data reference for Public IP addresses
+description: This article contains important reference material you need when you monitor Azure Public IP addresses.
+ Last updated : 07/21/2024
- Previously updated : 08/24/2023
-# Monitoring Public IP addresses data reference
+# Public IP addresses monitoring data reference
-See [Monitoring Public IP address](monitor-public-ip.md) for details on collecting and analyzing monitoring data for Public IP addresses.
-## Metrics
+See [Monitor Public IP addresses](monitor-public-ip.md) for details on the data you can collect for Public IP addresses and how to use it.
-This section lists all the automatically collected platform metrics collected for Public IP addresses.
-|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| Public IP Addresses | [Microsoft.Network/publicIPAddresses](/azure/azure-monitor/platform/metrics-supported#microsoftnetworkpublicipaddresses) |
+### Supported metrics for Microsoft.Network/publicIPAddresses
-For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+The following table lists the metrics available for the Microsoft.Network/publicIPAddresses resource type.
-## Metric Dimensions
-For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
-Public IP Addresses have the following dimensions associated with its metrics.
-| Dimension Name | Description |
-| - | -- |
-| **Port** | The port of the traffic. |
-| **Direction** | The direction of the traffic, inbound or outbound. |
-## Resource logs
+| Dimension name | Description |
+|:|:|
+| Port | The port of the traffic. |
+| Direction | The direction of the traffic: inbound or outbound. |
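To show how these dimensions are used, a hedged sketch that lists the public IP `PacketCount` metric split on the Direction dimension (resource names are placeholders; `'*'` requests all dimension values):

```azurecli
# List packet-count metrics for a public IP address, split by traffic direction.
ipid=$(az network public-ip show \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --query id --output tsv)

az monitor metrics list \
  --resource $ipid \
  --metric PacketCount \
  --filter "Direction eq '*'" \
  --interval PT1H
```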
-This section lists the types of resource logs you can collect for Public IP addresses.
-For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+### Supported resource logs for Microsoft.Network/publicIPAddresses
-This section lists all the resource log category types collected for Public IP addresses.
-|Resource Log Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| Public IP addresses | [Microsoft.Network/publicIPAddresses](/azure/azure-monitor/platform/resource-logs-categories#microsoftnetworkpublicipaddresses) |
-## Azure Monitor Logs tables
+### Public IP addresses Microsoft.Network/publicIPAddresses
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Public IP addresses and available for query by Log Analytics.
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
-For more information, see [Azure Monitor Logs table reference organized by resource type](/azure/azure-monitor/reference/tables/tables-resourcetype#public-ip-addresses)
-## Activity log
+- [Networking resource provider operations](/azure/role-based-access-control/resource-provider-operations#networking)
-The following table lists the operations that Public IP addresses may record in the Activity log. This is a subset of the possible entries your might find in the activity log.
+The following table lists the operations that Public IP addresses can record in the Activity log. This list is a subset of the possible entries you might find in the activity log.
| Namespace | Description |
|:|:|
| Microsoft.Network/publicIPAddresses/read | Gets a public IP address definition. |
-| Microsoft.Network/publicIPAddresses/write | Creates a public Ip address or updates an existing public Ip address. |
-| Microsoft.Network/publicIPAddresses/delete | Deletes a public Ip address. |
-| Microsoft.Network/publicIPAddresses/join/action | Joins a public ip address. Not Alertable. |
-| Microsoft.Network/publicIPAddresses/dnsAliases/read | Gets a Public Ip Address Dns Alias resource |
-| Microsoft.Network/publicIPAddresses/dnsAliases/write | Creates a Public Ip Address Dns Alias resource |
-| Microsoft.Network/publicIPAddresses/dnsAliases/delete | Deletes a Public Ip Address Dns Alias resource |
-| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings of Public IP Address |
-| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/diagnosticSettings/write | Create or update the diagnostic settings of Public IP Address |
-| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/logDefinitions/read | Get the log definitions of Public IP Address |
-| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/metricDefinitions/read | Get the metrics definitions of Public IP Address |
-
-See [all the possible resource provider operations in the activity log](../../role-based-access-control/resource-provider-operations.md).
-
-For more information on the schema of Activity Log entries, see [Activity Log schema](../../azure-monitor/essentials/activity-log-schema.md).
-
-## See Also
--- See [Monitoring Azure Public IP Address](monitor-public-ip.md) for a description of monitoring Azure Public IP addresses.--- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+| Microsoft.Network/publicIPAddresses/write | Creates a public IP address or updates an existing public IP address. |
+| Microsoft.Network/publicIPAddresses/delete | Deletes a public IP address. |
+| Microsoft.Network/publicIPAddresses/join/action | Joins a public IP address. Not Alertable. |
+| Microsoft.Network/publicIPAddresses/dnsAliases/read | Gets a Public IP Address Dns Alias resource. |
+| Microsoft.Network/publicIPAddresses/dnsAliases/write | Creates a Public IP Address Dns Alias resource. |
+| Microsoft.Network/publicIPAddresses/dnsAliases/delete | Deletes a Public IP Address Dns Alias resource. |
+| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings of Public IP Address. |
+| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/diagnosticSettings/write | Create or update the diagnostic settings of Public IP Address. |
+| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/logDefinitions/read | Get the log definitions of Public IP Address. |
+| Microsoft.Network/publicIPAddresses/providers/Microsoft.Insights/metricDefinitions/read | Get the metrics definitions of Public IP Address. |
+
+## Related content
+
+- See [Monitor Public IP addresses](monitor-public-ip.md) for a description of monitoring Public IP addresses.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
virtual-network Monitor Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip.md
Title: Monitoring Public IP addresses-
-description: Start here to learn how to monitor Public IP addresses
+ Title: Monitor Public IP addresses
+description: Start here to learn how to monitor Azure Public IP addresses by using Azure Monitor.
+ Last updated : 07/21/2024
- Previously updated : 08/24/2023
-# Monitoring Public IP addresses
+# Monitor Public IP addresses
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Public IP Address. Public IP Addresses use [Azure Monitor](../../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md).
-## *Public IP Addresses* insights
+Public IP Address insights provide:
-Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
+- Traffic data
+- DDoS information
-Public IP Address insights provides:
-* Traffic data
+For more information about the resource types for Public IP addresses, see [Public IP addresses monitoring data reference](monitor-public-ip-reference.md).
-* DDoS information
-## Monitoring data
-Public IP Addresses collect the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
+For a list of available metrics for Public IP addresses, see [Public IP addresses monitoring data reference](monitor-public-ip-reference.md#metrics).
-See [Monitoring *Public IP Addresses* data reference](monitor-public-ip-reference.md) for detailed information on the metrics and logs metrics created by Public IP Addresses.
-## Collection and routing
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for Public IP addresses, see [Public IP addresses monitoring data reference](monitor-public-ip-reference.md#resource-logs).
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
-For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Public IP Addresses* are listed in [Monitoring *Public IP Addresses* data reference](monitor-public-ip-reference.md).
-## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, or the Azure CLI.
+The following image is an example of the built-in queries for Public IP addresses that are found in the Log Analytics queries interface in the Azure portal.
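If the resource logs are routed to a Log Analytics workspace, similar queries can also be run from the CLI. A sketch with a placeholder workspace GUID and a deliberately simple query; the exact `ResourceType` value depends on which categories you route:

```azurecli
# Query routed public IP diagnostics in a Log Analytics workspace.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "AzureDiagnostics | where ResourceType == 'PUBLICIPADDRESSES' | take 10"
```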
-### Portal
-1. Sign-in to the [Azure Portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Public IP addresses**. Select **Public IP addresses** in the search results.
+### Public IP addresses alert rules
-3. Select your the public IP address that you want to enable the setting for. For this example, **myPublicIP** is used.
-
-4. In the **Monitoring** section of your public IP address, select **Diagnostic settings**.
-
-5. Select **+Add diagnostic setting**.
-
-6. Enter or select the following information in **Diagnostic setting**.
-
- | Setting | Value |
- | - | -- |
- | Diagnostic setting name | Enter a name for the diagnostic setting. |
- | **Logs** | |
- | Categories | Select **DDoSProtectionNotifications**, **DDoSMitigationFlowLogs**, and **DDoSMitigationReports**. |
- | **Metrics** | |
- | Select **AllMetrics**. |
-
-7. Select the **Destination details**. Some of the destination options are:
-
- * **Send to Log Analytics workspace**
-
- * Select the **Subscription** and **Log Analytics workspace**.
-
- * **Archive to a storage account**
-
- * Select the **Subscription** and **Storage account**.
-
- * **Stream to an event hub**
-
- * Select the **Subscription**, **Event hub namespace**, **Event hub name (optional)**, and **Event hub policy name**.
-
- * **Send to a partner solution**
-
- * Select the **Subscription** and **Destination**.
-
-8. Select **Save**.
-
-### PowerShell
-
-Sign in to Azure PowerShell:
-
-```azurepowershell
-Connect-AzAccount
-```
-
-#### Log analytics workspace
-
-To send resource logs to a Log Analytics workspace, enter these commands. In this example, **myResourceGroup**, **myLogAnalyticsWorkspace** and **myPublicIP** are used for the resource values. Replace these values with yours.
-
-```azurepowershell
-## Place the public IP in a variable. ##
-$ippara = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myPublicIP'
-}
-$ip = Get-AzPublicIPAddress @ippara
-
-## Place the workspace in a variable. ##
-$wspara = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myLogAnalyticsWorkspace'
-}
-$ws = Get-AzOperationalInsightsWorkspace @wspara
-
-## Enable the diagnostic setting. ##
-$diag = @{
- ResourceId = $ip.id
- Name = 'myDiagnosticSetting'
- Enabled = $true
- WorkspaceId = $ws.ResourceId
-}
-Set-AzDiagnosticSetting @diag
-```
-
-#### Storage account
-
-To send resource logs to a storage account, enter these commands. In this example, **myResourceGroup**, **mystorageaccount8675** and **myPublicIP** are used for the resource values. Replace these values with yours.
-
-```azurepowershell
-## Place the public IP in a variable. ##
-$ippara = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myPublicIP'
-}
-$lb = Get-AzPublicIPAddress @ippara
-
-## Place the storage account in a variable. ##
-$storpara = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'mystorageaccount8675'
-}
-$storage = Get-AzStorageAccount @storpara
-
-## Enable the diagnostic setting. ##
-$diag = @{
- ResourceId = $ip.id
- Name = 'myDiagnosticSetting'
- StorageAccountId = $storage.id
- Enabled = $true
-}
-Set-AzDiagnosticSetting @diag
-```
-
-#### Event hub
-
-To send resource logs to an event hub namespace, enter these commands. In this example, **myResourceGroup**, **myeventhub8675** and **myPublicIP** are used for the resource values. Replace these values with yours.
-
-```azurepowershell
-## Place the public IP in a variable. ##
-$ippara = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myPublicIP'
-}
-$lb = Get-AzPublicIPAddress @ippara
-
-## Place the event hub in a variable. ##
-$hubpara = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myeventhub8675'
-}
-$eventhub = Get-AzEventHubNamespace @hubpara
-
-## Place the event hub authorization rule in a variable. ##
-$hubrule = @{
- ResourceGroupName = 'myResourceGroup'
- Namespace = 'myeventhub8675'
-}
-$eventhubrule = Get-AzEventHubAuthorizationRule @hubrule
-
-## Enable the diagnostic setting. ##
-$diag = @{
- ResourceId = $ip.id
- Name = 'myDiagnosticSetting'
- EventHubName = $eventhub.Name
- EventHubAuthorizationRuleId = $eventhubrule.Id
- Enabled = $true
-}
-Set-AzDiagnosticSetting @diag
-```
-
-### Azure CLI
-
-Sign in to Azure CLI:
-
-```azurecli
-az login
-```
-
-#### Log analytics workspace
-
-To send resource logs to a Log Analytics workspace, enter these commands. In this example, **myResourceGroup**, **myLogAnalyticsWorkspace** and **myPublicIP** are used for the resource values. Replace these values with yours.
-
-```azurecli
-ipid=$(az network public-ip show \
- --name myPublicIP \
- --resource-group myResourceGroup \
- --query id \
- --output tsv)
-
-wsid=$(az monitor log-analytics workspace show \
- --resource-group myResourceGroup \
- --workspace-name myLogAnalyticsWorkspace \
- --query id \
- --output tsv)
-
-az monitor diagnostic-settings create \
- --name myDiagnosticSetting \
- --resource $ipid \
- --logs '[{"category": "DDoSProtectionNotifications","enabled": true},{"category": "DDoSMitigationFlowLogs","enabled": true},{"category": "DDoSMitigationReports","enabled": true}]' \
- --metrics '[{"category": "AllMetrics","enabled": true}]' \
- --workspace $wsid
-```
-
-#### Storage account
-
-To send resource logs to a storage account, enter these commands. In this example, **myResourceGroup**, **mystorageaccount8675** and **myPublicIP** are used for the resource values. Replace these values with yours.
-
-```azurecli
-ipid=$(az network public-ip show \
- --name myPublicIP \
- --resource-group myResourceGroup \
- --query id \
- --output tsv)
-
-storid=$(az storage account show \
- --name mystorageaccount8675 \
- --resource-group myResourceGroup \
- --query id \
- --output tsv)
-
-az monitor diagnostic-settings create \
- --name myDiagnosticSetting \
- --resource $ipid \
- --logs '[{"category": "DDoSProtectionNotifications","enabled": true},{"category": "DDoSMitigationFlowLogs","enabled": true},{"category": "DDoSMitigationReports","enabled": true}]' \
- --metrics '[{"category": "AllMetrics","enabled": true}]' \
- --storage-account $storid
-```
-
-#### Event hub
-
-To send resource logs to an event hub namespace, enter these commands. In this example, **myResourceGroup**, **myeventhub8675** and **myPublicIP** are used for the resource values. Replace these values with yours.
-
-```azurecli
-ipid=$(az network public-ip show \
- --name myPublicIP \
- --resource-group myResourceGroup \
- --query id \
- --output tsv)
-
-az monitor diagnostic-settings create \
- --name myDiagnosticSetting \
- --resource $ipid \
- --event-hub myeventhub8675 \
- --event-hub-rule RootManageSharedAccessKey \
- --logs '[{"category": "DDoSProtectionNotifications","enabled": true},{"category": "DDoSMitigationFlowLogs","enabled": true},{"category": "DDoSMitigationReports","enabled": true}]' \
- --metrics '[{"category": "AllMetrics","enabled": true}]'
-```
-
-The metrics and logs you can collect are discussed in the following sections.
-
-## Analyzing metrics
-
-You can analyze metrics for *Public IP Addresses* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
-
-For a list of the platform metrics collected for Public IP Address, see [Monitoring *Public IP Addresses* data reference](monitor-public-ip-reference.md#metrics) .
-
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../../azure-monitor/essentials/metrics-supported.md).
-
-## Analyzing logs
-
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md).
-
-The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-
-For a list of the types of resource logs collected for Public IP addresses, see [Monitoring *Public IP Addresses* data reference](monitor-public-ip-reference.md#resource-logs).
-
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring *Public IP Addresses* data reference](monitor-public-ip-reference.md#azure-monitor-logs-tables).
-
-### Sample Kusto queries
-
-> [!IMPORTANT]
-> When you select **Logs** from the Public IP menu, Log Analytics is opened with the query scope set to the current Public IP address. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md) for details.
-
-For a list of common queries for Public IP addresses, see the [Log Analytics queries interface](../../azure-monitor/logs/queries.md).
-
-The following is an example of the built in queries for Public IP addresses that are found within the Long Analytics queries interface in the Azure portal.
--
-## Alerts
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
-
-The following table lists common and recommended alert rules for Public IP addresses.
+The following table lists some suggested alert rules for Public IP addresses. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Public IP addresses monitoring data reference](monitor-public-ip-reference.md).
| Alert type | Condition | Description |
|:|:|:|
-| Under DDoS attack or not | **GreaterThan** 0. </br> **1** is currently under attack. </br> **0** indicates normal activity | As part of Azure's edge protection, public IP addresses are monitored for DDoS attacks. An alert will allow you to be notified if your public IP address is affected. |
+| Under DDoS attack or not | **GreaterThan** 0.</br> **1** is currently under attack.</br> **0** indicates normal activity | As part of Azure's edge protection, public IP addresses are monitored for DDoS attacks. An alert allows you to be notified if your public IP address is affected. |
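As a hedged sketch of the alert in the preceding table, created with the CLI (resource names and the action group are placeholders):

```azurecli
# Alert when a public IP address reports IfUnderDDoSAttack > 0.
ipid=$(az network public-ip show \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --query id --output tsv)

az monitor metrics alert create \
  --name ddos-attack-alert \
  --resource-group myResourceGroup \
  --scopes $ipid \
  --condition "max IfUnderDDoSAttack > 0" \
  --action myActionGroup
```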
-## Next steps
-- See [Monitoring *Public IP Addresses* data reference](monitor-public-ip-reference.md) for a reference of the metrics, logs, and other important values created by Public IP Address.
+## Related content
-- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Public IP addresses monitoring data reference](monitor-public-ip-reference.md) for a reference of the metrics, logs, and other important values created for Public IP addresses.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
virtual-wan About Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md
You can select one of the three possible virtual hub routing preference configurations. (A PowerShell sketch for setting the preference follows the notes below.)
* When there are multiple virtual hubs in a Virtual WAN scenario, a virtual hub selects the best routes by using the route selection algorithm described above, and then advertises them to the other virtual hubs in the virtual WAN.
* For a given set of destination route-prefixes, if the ExpressRoute routes are preferred and the ExpressRoute connection subsequently goes down, then routes from S2S VPN or SD-WAN NVA connections are preferred for traffic destined to the same route-prefixes. When the ExpressRoute connection is restored, traffic destined for these route-prefixes might continue to prefer the S2S VPN or SD-WAN NVA connections. To prevent this, configure your on-premises device to use AS-Path prepending for the routes advertised to your S2S VPN gateway and SD-WAN NVA, ensuring that the AS-Path is longer for VPN/NVA routes than for ExpressRoute routes.
+* When processing routes from remote hubs, routes learned from hubs with routing intent private routing policies are always preferred over routes from hubs without routing intent. This behavior ensures that customer traffic takes the secure path when one is available. To avoid asymmetric routing, enable Routing Intent on all hubs in your Virtual WAN.
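+
+For reference, the hub routing preference can also be set programmatically. The following is a minimal Azure PowerShell sketch with hypothetical resource names; it assumes your Az.Network version exposes the `-HubRoutingPreference` parameter on `Update-AzVirtualHub`.
+
+```azurepowershell-interactive
+# Hypothetical hub and resource group names; the preference values correspond to
+# the three configurations: ExpressRoute, VpnGateway, and ASPath.
+Update-AzVirtualHub -ResourceGroupName "MyRG" -Name "MyHub" -HubRoutingPreference "ASPath"
+```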
## Routing scenarios
vpn-gateway About Gateway Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-gateway-skus.md
description: Learn about VPN Gateway SKUs.
Previously updated : 01/23/2024 Last updated : 07/23/2024
For information about working with the legacy gateway SKUs (Basic, Standard, and
You specify the gateway SKU when you create your VPN Gateway. See the following articles for steps:

* [Azure portal](tutorial-create-gateway-portal.md)
-* [PowerShell](create-routebased-vpn-gateway-powershell.md)
+* [PowerShell](create-gateway-powershell.md)
* [Azure CLI](create-routebased-vpn-gateway-cli.md)

## <a name="resizechange"></a>Change or resize a SKU
vpn-gateway Create Gateway Basic Sku Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-gateway-basic-sku-powershell.md
Remove-AzResourceGroup -Name TestRG1
Once the gateway finishes creating, you can create a connection between your virtual network and another virtual network. Or, create a connection between your virtual network and an on-premises location. See the following articles:
-* [Create a site-to-site connection](vpn-gateway-create-site-to-site-rm-powershell.md)
+* [Add or remove a site-to-site connection](add-remove-site-to-site-connections.md)
* [Create a point-to-site connection](vpn-gateway-howto-point-to-site-rm-ps.md)
* [Create a connection to another virtual network](vpn-gateway-vnet-vnet-rm-ps.md)
vpn-gateway Create Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-gateway-powershell.md
+
+ Title: 'Create a virtual network gateway: PowerShell'
+
+description: Learn how to create a route-based virtual network gateway for a VPN connection to your on-premises network, or to connect virtual networks.
+Last updated : 07/23/2024
+# Create a VPN gateway using PowerShell
+
+This article helps you create an Azure VPN gateway using PowerShell. A VPN gateway is used when creating a VPN connection to your on-premises network. You can also use a VPN gateway to connect VNets. For more comprehensive information about some of the settings in this article, see [Create a VPN gateway - portal](tutorial-create-gateway-portal.md).
++
+A VPN gateway is one part of a connection architecture to help you securely access resources within a virtual network.
+
+* The left side of the diagram shows the virtual network and the VPN gateway that you create by using the steps in this article.
+* You can later add different types of connections, as shown on the right side of the diagram. For example, you can create [site-to-site](tutorial-site-to-site-portal.md) and [point-to-site](point-to-site-about.md) connections. To view different design architectures that you can build, see [VPN gateway design](design.md).
+
+The steps in this article create a virtual network, a subnet, a gateway subnet, and a route-based, zone-redundant active-active VPN gateway (virtual network gateway) using the Generation 2 VpnGw2AZ SKU. If you want to create a VPN gateway using the **Basic** SKU instead, see [Create a Basic SKU VPN gateway](create-gateway-basic-sku-powershell.md). Once the gateway creation completes, you can then create connections.
+
+Active-active gateways differ from active-standby gateways in the following ways:
+
+* Active-active gateways have two Gateway IP configurations and two public IP addresses.
+* Active-active gateways have the active-active setting enabled.
+* The virtual network gateway SKU can't be Basic or Standard.
+
+For more information about active-active gateways, see [Highly Available cross-premises and VNet-to-VNet connectivity](vpn-gateway-highlyavailable.md).
+For more information about availability zones and zone-redundant gateways, see [What are availability zones?](https://learn.microsoft.com/azure/reliability/availability-zones-overview?toc=%2Fazure%2Fvpn-gateway%2Ftoc.json&tabs=azure-cli#availability-zones)
+
+## Before you begin
+
+These steps require an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+### Working with Azure PowerShell
++
+## Create a resource group
+
+Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name TestRG1 -Location EastUS
+```
+
+## <a name="vnet"></a>Create a virtual network
+
+Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named **VNet1** in the **EastUS** location:
+
+```azurepowershell-interactive
+$virtualnetwork = New-AzVirtualNetwork `
+ -ResourceGroupName TestRG1 `
+ -Location EastUS `
+ -Name VNet1 `
+ -AddressPrefix 10.1.0.0/16
+```
+
+Create a subnet configuration using the [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig) cmdlet.
+
+```azurepowershell-interactive
+$subnetConfig = Add-AzVirtualNetworkSubnetConfig `
+ -Name Frontend `
+ -AddressPrefix 10.1.0.0/24 `
+ -VirtualNetwork $virtualnetwork
+```
+
+Set the subnet configuration for the virtual network using the [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork) cmdlet.
+
+```azurepowershell-interactive
+$virtualnetwork | Set-AzVirtualNetwork
+```
+
+## <a name="gwsubnet"></a>Add a gateway subnet
+
+The gateway subnet contains the reserved IP addresses that the virtual network gateway services use. Use the following examples to add a gateway subnet:
+
+Set a variable for your virtual network.
+
+```azurepowershell-interactive
+$vnet = Get-AzVirtualNetwork -ResourceGroupName TestRG1 -Name VNet1
+```
+
+Create the gateway subnet using the [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/Add-azVirtualNetworkSubnetConfig) cmdlet.
+
+```azurepowershell-interactive
+Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 -VirtualNetwork $vnet
+```
+
+Set the subnet configuration for the virtual network using the [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork) cmdlet.
+
+```azurepowershell-interactive
+$vnet | Set-AzVirtualNetwork
+```
+
+## <a name="PublicIP"></a>Request a public IP address
+
+Each VPN gateway must have an allocated public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. In this exercise, we create an active-active, zone-redundant VPN gateway environment. That means two Standard public IP addresses are required, one for each gateway instance, and the Zone setting must also be specified. This example creates a zone-redundant configuration by specifying all three regional zones.
+
+Use the following examples to request a public IP address for each gateway. The allocation method must be **Static**.
+
+```azurepowershell-interactive
+$gw1pip1 = New-AzPublicIpAddress -Name "VNet1GWpip1" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard -Zone 1,2,3
+```
+
+```azurepowershell-interactive
+$gw1pip2 = New-AzPublicIpAddress -Name "VNet1GWpip2" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard -Zone 1,2,3
+```
+
+## <a name="GatewayIPConfig"></a>Create the gateway IP address configuration
+
+The gateway configuration defines the subnet and the public IP address to use. Use the following example to create your gateway configuration.
+
+```azurepowershell-interactive
+$vnet = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName TestRG1
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
+
+$gwipconfig1 = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $subnet.Id -PublicIpAddressId $gw1pip1.Id
+$gwipconfig2 = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig2 -SubnetId $subnet.Id -PublicIpAddressId $gw1pip2.Id
+```
+
+## <a name="CreateGateway"></a>Create the VPN gateway
+
+Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. Once the gateway is created, you can create a connection between your virtual network and another virtual network. Or, create a connection between your virtual network and an on-premises location.
+
+Create a VPN gateway using the [New-AzVirtualNetworkGateway](/powershell/module/az.network/New-azVirtualNetworkGateway) cmdlet. Notice in the example that both public IP addresses are referenced and the gateway is configured as active-active. The example also adds the optional `-Debug` switch, which can help you troubleshoot deployment issues.
+
+```azurepowershell-interactive
+New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 `
+-Location "East US" -IpConfigurations $gwipconfig1,$gwipconfig2 -GatewayType "Vpn" -VpnType RouteBased `
+-GatewaySku VpnGw2AZ -VpnGatewayGeneration Generation2 -EnableActiveActiveFeature -Debug
+```
+
+## <a name="viewgw"></a>View the VPN gateway
+
+You can view the VPN gateway using the [Get-AzVirtualNetworkGateway](/powershell/module/az.network/Get-azVirtualNetworkGateway) cmdlet.
+
+```azurepowershell-interactive
+Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1
+```
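+
+Because this gateway was deployed as active-active, you can spot-check the configuration from the returned object. A quick sketch, assuming the gateway was created with the settings shown earlier:
+
+```azurepowershell-interactive
+$gateway = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1
+
+# Expect True for an active-active gateway and 2 for the two IP configurations.
+$gateway.ActiveActive
+$gateway.IpConfigurations.Count
+```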
+
+## <a name="viewgwpip"></a>View the public IP addresses
+
+To view the public IP address for your VPN gateway, use the [Get-AzPublicIpAddress](/powershell/module/az.network/Get-azPublicIpAddress) cmdlet. Example:
+
+```azurepowershell-interactive
+Get-AzPublicIpAddress -Name VNet1GWpip1 -ResourceGroupName TestRG1
+```
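+
+Because this configuration uses two public IP addresses, one per gateway instance, you can also list all addresses in the resource group at once:
+
+```azurepowershell-interactive
+# Lists VNet1GWpip1 and VNet1GWpip2 along with their assigned addresses.
+Get-AzPublicIpAddress -ResourceGroupName TestRG1 | Select-Object Name, IpAddress
+```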
+
+## Clean up resources
+
+When you no longer need the resources you created, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to delete the resource group. This deletes the resource group and all of the resources it contains.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name TestRG1
+```
+
+## Next steps
+
+Once the gateway has finished creating, you can create a connection between your virtual network and another virtual network. Or, create a connection between your virtual network and an on-premises location.
+
+* [Create a site-to-site connection](vpn-gateway-create-site-to-site-rm-powershell.md)
+* [Create a point-to-site connection](vpn-gateway-howto-point-to-site-rm-ps.md)
+* [Create a connection to another VNet](vpn-gateway-vnet-vnet-rm-ps.md)
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
description: Learn what VPN Gateway is, and how to use a VPN gateway to connect
Previously updated : 02/29/2024 Last updated : 07/23/2024 # Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure VPN Gateway so that I can securely connect to my Azure virtual networks.
vpn-gateway Vpn Gateway Vnet Vnet Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md
# Configure a VNet-to-VNet VPN gateway connection using PowerShell
-This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks can be in the same or different regions, and from the same or different subscriptions. When you connect virtual networks from different subscriptions, the subscriptions don't need to be associated with the same tenant.
+This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks can be in the same or different regions, and from the same or different subscriptions. When you connect virtual networks from different subscriptions, the subscriptions don't need to be associated with the same tenant. If you already have VNets that you want to connect and they're in the same subscription, you might want to use the [Azure portal](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) steps instead because the process is less complicated. Note that you can't connect VNets from different subscriptions using the Azure portal.
-In this exercise, you create the required virtual networks (VNets) and VPN gateways. We have steps to connect VNets within the same subscription, as well as steps and commands for the more complicated scenario to connect VNets in different subscriptions.
-
-The PowerShell cmdlet to create a connection is [New-AzVirtualNetworkGatewayConnection](/powershell/module/az.network/new-azvirtualnetworkgatewayconnection). The `-ConnectionType` is `Vnet2Vnet`. If you're connecting VNets from different subscriptions, use the steps in this article or in the [Azure CLI](vpn-gateway-howto-vnet-vnet-cli.md) article. If you already have VNets that you want to connect and they're in the same subscription, you might want to use the [Azure portal](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) steps instead because the process is less complicated. Note that you can't connect VNets from different subscriptions using the Azure portal.
+In this exercise, you create the required virtual networks (VNets) and VPN gateways. We have steps to connect VNets within the same subscription, as well as steps and commands for the more complicated scenario to connect VNets in different subscriptions. The PowerShell cmdlet to create a connection is [New-AzVirtualNetworkGatewayConnection](/powershell/module/az.network/new-azvirtualnetworkgatewayconnection). The `-ConnectionType` is `Vnet2Vnet`.
:::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png" alt-text="VNet to VNet diagram." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png":::
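+
+For illustration, here's a minimal same-subscription sketch of that cmdlet. The gateway and resource group names are hypothetical, and a matching connection must also be created in the opposite direction (VNet4 to VNet1) by using the same shared key:
+
+```azurepowershell-interactive
+# Hypothetical gateways created earlier in each virtual network.
+$vnet1gw = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1
+$vnet4gw = Get-AzVirtualNetworkGateway -Name VNet4GW -ResourceGroupName TestRG4
+
+New-AzVirtualNetworkGatewayConnection -Name VNet1toVNet4 -ResourceGroupName TestRG1 `
+    -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet4gw `
+    -Location EastUS -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
+```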