Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
api-center | Enable Api Analysis Linting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-analysis-linting.md | -# Enable linting and analysis for API governance in your API center +# Enable API analysis in your API center - self-managed -This article shows how to enable linting to analyze API definitions in your organization's [API center](overview.md) for conformance with your organizations's API style rules. Linting generates an analysis report that you can access in your API center. Use API linting and analysis to detect common errors and inconsistencies in your API definitions. +This article explains how to enable API analysis in [Azure API Center](overview.md) by manually setting up a linting engine and triggers. API analysis offers linting capabilities to analyze API definitions in your organization's API center. Linting ensures your API definitions adhere to organizational style rules, generating both individual and summary reports. Use API analysis to identify and correct common errors and inconsistencies in your API definitions. ++> [!NOTE] +> In preview, Azure API Center can also automatically set up a linting engine and any required dependencies and triggers. [Learn more](enable-managed-api-analysis-linting.md). > [!VIDEO https://www.youtube.com/embed/m0XATQaVhxA] ## Scenario overview -In this scenario, you analyze API definitions in your API center by using the [Spectral](https://github.com/stoplightio/spectral) open source linting engine. An Azure Functions app runs the linting engine in response to events in your API center. Spectral checks that the APIs defined in a JSON or YAML specification document conform to the rules in a customizable API style guide. A report of API compliance is generated that you can view in your API center. +In this scenario, you analyze API definitions in your API center by using the [Spectral](https://github.com/stoplightio/spectral) open source linting engine. 
An Azure Functions app runs the linting engine in response to events in your API center. Spectral checks that the APIs defined in a JSON or YAML specification document conform to the rules in a customizable API style guide. An analysis report is generated that you can view in your API center. The following diagram shows the steps to enable linting and analysis in your API center. To view the analysis report for an API definition in your API center: The **API Analysis Report** opens, and it displays the API definition and errors, warnings, and information based on the configured API style guide. The following screenshot shows an example of an API analysis report. ### API analysis summary To view a summary of analysis reports for all API definitions in your API center 1. In the portal, navigate to your API center. 1. In the left-hand menu, under **Governance**, select **API Analysis**. The summary appears. - :::image type="content" source="media/enable-api-analysis-linting/api-analysis-summary.png" alt-text="Screenshot of the API analysis summary in the portal."::: + :::image type="content" source="media/enable-api-analysis-linting/api-analysis-summary.png" alt-text="Screenshot of the API analysis summary in the portal." lightbox="media/enable-api-analysis-linting/api-analysis-summary.png"::: ## Related content Learn more about Event Grid: +* [Enable API analysis in your API center - Microsoft managed](enable-managed-api-analysis-linting.md) * [System topics in Azure Event Grid](../event-grid/system-topics.md) * [Event Grid push delivery - concepts](../event-grid/concepts.md) * [Event Grid schema for Azure API Center](../event-grid/event-schema-api-center.md) |
api-center | Enable Managed Api Analysis Linting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-managed-api-analysis-linting.md | + + Title: Managed API linting and analysis - Azure API Center +description: Enable managed linting of API definitions in your API center to analyze compliance of APIs with the organization's API style guide. ++ Last updated : 08/23/2024++++# Customer intent: As an API developer or API program manager, I want to analyze the API definitions in my organization's API center for compliance with my organization's API style guide. +++# Enable API analysis in your API center - Microsoft managed ++This article explains how to enable API analysis in [Azure API Center](overview.md) without having to manage it yourself (preview). API analysis offers linting capabilities to analyze API definitions in your organization's API center. Linting ensures your API definitions adhere to organizational style rules, generating both individual and summary reports. Use API analysis to identify and correct common errors and inconsistencies in your API definitions. ++> [!NOTE] +> With managed linting and analysis, API Center sets up a linting engine and any required dependencies and triggers. You can also enable linting and analysis [manually](enable-api-analysis-linting.md). ++In this scenario: ++1. Add a linting ruleset (API style guide) in your API center using the Visual Studio Code extension for Azure API Center. +1. Azure API Center automatically runs linting when you add or update an API definition. It's also triggered for all API definitions when you deploy a ruleset to your API center. +1. Review API analysis reports in the Azure portal to see how your API definitions conform to the style guide. +1. Optionally customize the ruleset for your organization's APIs. Test the custom ruleset locally before deploying it to your API center. 
++## Limitations ++* Currently, only OpenAPI specification documents in JSON or YAML format are analyzed. +* By default, you enable analysis with the [`spectral:oas` ruleset](https://docs.stoplight.io/docs/spectral/4dec24461f3af-open-api-rules). To learn more about the built-in rules, see the [Spectral GitHub repo](https://github.com/stoplightio/spectral/blob/develop/docs/reference/openapi-rules.md). +* Currently, you configure a single ruleset, and it's applied to all OpenAPI definitions in your API center. ++## Prerequisites ++* An API center in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md). +* [Visual Studio Code](https://code.visualstudio.com/) ++* The following Visual Studio Code extensions: + * [Azure API Center extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center) ++ > [!IMPORTANT] + > Enable managed API analysis using the API Center extension's pre-release version. When installing the extension, choose the pre-release version. Switch between release and pre-release versions any time via the extension's **Manage** button in the Extensions view. + * [Spectral extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=stoplight.spectral) + +## Enable API analysis using Visual Studio Code ++To enable API analysis using the default linting ruleset: ++1. In Visual Studio Code, open a folder that you'll use to manage rulesets for Azure API Center. +1. Select the Azure API Center icon from the Activity Bar. +1. In the API Center pane, expand the API center resource in which to enable API analysis. +1. Right-click **Rules** and select **Enable API Analysis**. 
++ :::image type="content" source="media/enable-managed-api-analysis-linting/enable-analysis-visual-studio-code.png" alt-text="Screenshot of enabling API linting and analysis in Visual Studio Code."::: ++A message notifies you after API analysis is successfully enabled. A folder for your API center is created in `.api-center-rules`, at the root of your working folder. The folder for your API center contains: + +* A `ruleset.yml` file that defines the default API style guide used by the linting engine. +* A `functions` folder with an example custom function that you can use to extend the ruleset. ++With analysis enabled, the linting engine analyzes API definitions in your API center based on the default ruleset and generates API analysis reports. ++## View API analysis reports ++View an analysis summary and the analysis reports for your API definitions in the Azure portal. After API definitions are analyzed, the reports list errors, warnings, and information based on the configured API style guide. ++To view an analysis summary in your API center: ++1. In the portal, navigate to your API center. +1. In the left-hand menu, under **Governance**, select **API Analysis**. The summary appears. ++ :::image type="content" source="media/enable-api-analysis-linting/api-analysis-summary.png" alt-text="Screenshot of the API analysis summary in the portal." lightbox="media/enable-api-analysis-linting/api-analysis-summary.png"::: ++1. Optionally select the API Analysis Report icon for an API definition. The definition's API analysis report appears, as shown in the following screenshot. ++ :::image type="content" source="media/enable-api-analysis-linting/api-analysis-report.png" alt-text="Screenshot of an API analysis report in the portal." lightbox="media/enable-api-analysis-linting/api-analysis-report.png"::: ++ > [!TIP] + > You can also view the API analysis report by selecting **Analysis** from the API definition's menu bar. 
++## Customize ruleset ++You can customize the default ruleset or replace it as your organization's API style guide. For example, you can [extend the ruleset](https://docs.stoplight.io/docs/spectral/83527ef2dd8c0-extending-rulesets) or add [custom functions](https://docs.stoplight.io/docs/spectral/a781e290eb9f9-custom-functions). ++To customize or replace the ruleset: ++1. In Visual Studio Code, open the `.api-center-rules` folder at the root of your working folder. +1. In the folder for the API center resource, open the `ruleset.yml` file. +1. Modify or replace the content as needed. +1. Save your changes to `ruleset.yml`. ++### Test ruleset locally ++Before deploying the custom ruleset to your API center, validate it locally. The Azure API Center extension for Visual Studio Code provides integrated support for API specification linting with Spectral. ++1. In Visual Studio Code, use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. +1. Type **Azure API Center: Set active API Style Guide** and hit **Enter**. +1. Choose **Select Local File** and specify the `ruleset.yml` file that you customized. Hit **Enter**. ++ This step makes the custom ruleset the active API style guide for linting. ++Now, when you open an OpenAPI-based API definition file, a local linting operation is automatically triggered in Visual Studio Code. Results are displayed inline in the editor and in the **Problems** window (**View > Problems** or **Ctrl+Shift+M**). +++Review the linting results. Make any necessary adjustments to the ruleset and continue to test it locally until it performs the way you want. ++### Deploy ruleset to your API center ++To deploy the custom ruleset to your API center: ++1. In Visual Studio Code, select the Azure API Center icon from the Activity Bar. +1. In the API Center pane, expand the API center resource in which you customized the ruleset. +1. Right-click **Rules** and select **Deploy Rules to API Center**. 
++A message notifies you after the rules are successfully deployed to your API center. The linting engine uses the updated ruleset to analyze API definitions. ++To see the results of linting with the updated ruleset, view the API analysis reports in the portal. ++## Related content ++* [Enable API analysis in your API center - self-managed](enable-api-analysis-linting.md) |
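As context for the "Customize ruleset" steps in the row above: the deployed `ruleset.yml` extends Spectral's built-in OpenAPI ruleset. A minimal customization might look like the following sketch — the `paths-kebab-case` rule, its message, and its regex are illustrative examples for this digest, not product defaults:

```yaml
# ruleset.yml — extends the built-in spectral:oas ruleset
extends: spectral:oas
rules:
  # Raise the severity of built-in rules
  info-contact: error
  operation-description: warn
  # Hypothetical custom rule: require kebab-case path segments
  paths-kebab-case:
    description: Path segments should be kebab-case.
    message: "{{property}} should be kebab-case."
    severity: warn
    given: $.paths[*]~
    then:
      function: pattern
      functionOptions:
        match: '^(/[a-z0-9-.{}]+)+$'
```

Per the article's "Test ruleset locally" section, a ruleset like this can be validated against an OpenAPI file in Visual Studio Code before deploying it to the API center.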
azure-app-configuration | Feature Management Dotnet Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/feature-management-dotnet-reference.md | Compared to normal feature flags, variant feature flags have two additional prop #### Defining Variants -Each variant has two properties: a name and a configuration. The name is used to refer to a specific variant, and the configuration is the value of that variant. The configuration can be set using either the `configuration_reference` or `configuration_value` properties. `configuration_reference` is a string path that references a section of the current configuration that contains the feature flag declaration. `configuration_value` is an inline configuration that can be a string, number, boolean, or configuration object. If both are specified, `configuration_value` is used. If neither are specified, the returned variant's `Configuration` property will be null. +Each variant has two properties: a name and a configuration. The name is used to refer to a specific variant, and the configuration is the value of that variant. The configuration can be set using the `configuration_value` property. `configuration_value` is an inline configuration that can be a string, number, boolean, or configuration object. If `configuration_value` is not specified, the returned variant's `Configuration` property will be null. A list of all possible variants is defined for each feature under the `variants` property. 
A list of all possible variants is defined for each feature under the `variants` "variants": [ { "name": "Big", - "configuration_reference": "ShoppingCart:Big" + "configuration_value": { + "Size": 500 + } }, { "name": "Small", A list of all possible variants is defined for each feature under the `variants` ] } ]- }, -- "ShoppingCart": { - "Big": { - "Size": 600, - "Color": "green" - }, - "Small": { - "Size": 300, - "Color": "gray" - } } } ``` The process of allocating a feature's variants is determined by the `allocation` "variants": [ { "name": "Big", - "configuration_reference": "ShoppingCart:Big" + "configuration_value": "500px" }, { "name": "Small", |
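Assembling the fragments in the diff above, a full variant feature flag declared with `configuration_value` could look like the following sketch (assuming the `Microsoft.FeatureManagement` JSON schema; the `ShoppingCart` flag name, sizes, and percentile split are illustrative):

```json
{
  "feature_management": {
    "feature_flags": [
      {
        "id": "ShoppingCart",
        "enabled": true,
        "variants": [
          { "name": "Big", "configuration_value": { "Size": 500 } },
          { "name": "Small", "configuration_value": { "Size": 300 } }
        ],
        "allocation": {
          "default_when_enabled": "Small",
          "percentile": [
            { "variant": "Big", "from": 0, "to": 50 },
            { "variant": "Small", "from": 50, "to": 100 }
          ]
        }
      }
    ]
  }
}
```

Because the configuration is inline, the separate `ShoppingCart` configuration section that `configuration_reference` pointed to is no longer needed.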
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md | Arc resource bridge supports the following Azure regions: * South Central US * Canada Central * Australia East+* Australia Southeast + * West Europe * North Europe * UK South |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Migration is a complex task. Start planning your migration to Azure Monitor Agen > - **Installation:** The ability to install the legacy agents will be removed from the Azure Portal and installation policies for legacy agents will be removed. You can still install the MMA agents extension as well as perform offline installations. > - **Customer Support:** You will not be able to get support for legacy agent issues. > - **OS Support:** Support for new Linux or Windows distros, including service packs, won't be added after the deprecation of the legacy agents.+> - Log Analytics Agent will continue to function but won't be able to connect to Log Analytics workspaces. +> - Log Analytics Agent can coexist with Azure Monitor Agent. Expect to see duplicate data if both agents are collecting the same data. ++  ## Benefits Using Azure Monitor agent, you get immediate benefits as shown below: |
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | Legacy table: availabilityResults |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: availabilityResults |`id`|string|`Id`|string| |`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|String| |location|string|Location|string| |message|string|Message|string| Legacy table: browserTimings |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: browserTimings |customMeasurements|dynamic|Measurements|Dynamic| |`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|string| |name|string|Name|datetime| |networkDuration|real|NetworkDurationMs|real| Legacy table: dependencies |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: dependencies |`id`|string|`Id`|string| |`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|String| |name|string|Name|string| 
|operation_Id|string|OperationId|string| Legacy table: customEvents |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: customEvents |customMeasurements|dynamic|Measurements|Dynamic| |`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|string| |name|string|Name|string| |operation_Id|string|OperationId|string| Legacy table: customMetrics |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: customMetrics |cloud_RoleName|string|AppRoleName|string| |customDimensions|dynamic|Properties|Dynamic| |`iKey`|string|`IKey`|string|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|string| |name|string|Name|string| |operation_Id|string|OperationId|string| Legacy table: customMetrics |user_Id|string|UserId|string| |value|real|(removed)|| |valueCount|int|ItemCount|int|-|valueMax|real|ValueMax|real| -|valueMin|real|ValueMin|real| -|valueSum|real|ValueSum|real| +|valueMax|real|Max|real| +|valueMin|real|Min|real| +|valueSum|real|Sum|real| +|valueStdDev|real|(removed)|| > [!NOTE] > Older versions of Application Insights SDKs are used to report standard deviation (`valueStdDev`) in the metrics pre-aggregation. Because adoption in metrics analysis was light, the field was removed and is no longer aggregated by the SDKs. 
If the value is received by the Application Insights data collection endpoint, it's dropped during ingestion and isn't sent to the Log Analytics workspace. If you want to use standard deviation in your analysis, use queries against Application Insights raw events. Legacy table: pageViews |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: pageViews |`id`|string|`Id`|string| |`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|String| |name|string|Name|string| |operation_Id|string|OperationId|string| Legacy table: performanceCounters |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |category|string|Category|string| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| Legacy table: performanceCounters |customDimensions|dynamic|Properties|Dynamic| |`iKey`|string|`IKey`|string| |instance|string|Instance|string|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|string| |name|string|Name|string| |operation_Id|string|OperationId|string| Legacy table: requests |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: requests |`id`|string|`Id`|String| |`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| 
|itemType|string|Type|String| |name|string|Name|String| |operation_Id|string|OperationId|string| Legacy table: exceptions |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |assembly|string|Assembly|string| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| Legacy table: exceptions |innermostMethod|string|InnermostMethod|string| |innermostType|string|InnermostType|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|string| |message|string|Message|string| |method|string|Method|string| Legacy table: traces |:|:|:|:| |appId|string|ResourceGUID|string| |application_Version|string|AppVersion|string|-|appName|string|\_ResourceId|string| +|appName|string|\(removed)|| |client_Browser|string|ClientBrowser|string| |client_City|string|ClientCity|string| |client_CountryOrRegion|string|ClientCountryOrRegion|string| Legacy table: traces |customMeasurements|dynamic|Measurements|dynamic| |`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int|-|itemId|string|\_ItemId|string| +|itemId|string|\(removed)|| |itemType|string|Type|string| |message|string|Message|string| |operation_Id|string|OperationId|string| |
azure-monitor | Autoscale Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md | Title: Get started with autoscale in Azure description: "Learn how to scale your resource web app, cloud service, virtual machine, or Virtual Machine Scale Set in Azure." Previously updated : 11/29/2023 Last updated : 08/26/2024 # Get started with autoscale in Azure To discover the resources that you can autoscale, follow these steps. 1. Open the [Azure portal.](https://portal.azure.com) -1. Using the search bar at the top of the page, search for and select *Azure Monitor* +1. Search for and select *Azure Monitor* using the search bar at the top of the page. 1. Select **Autoscale** to view all the resources for which autoscale is applicable, along with their current autoscale status. To discover the resources that you can autoscale, follow these steps. :::image type="content" source="./media/autoscale-get-started/view-resources.png" lightbox="./media/autoscale-get-started/view-resources.png" alt-text="A screenshot showing resources that can use autoscale and their statuses."::: The page shows the instance count and the autoscale status for each resource. Autoscale statuses are:- - **Not configured**: You haven't enabled autoscale yet for this resource. - - **Enabled**: You've enabled autoscale for this resource. - - **Disabled**: You've disabled autoscale for this resource. + - **Not configured**: Autoscale isn't set up yet for this resource. + - **Enabled**: Autoscale is enabled for this resource. + - **Disabled**: Autoscale is disabled for this resource. You can also reach the scaling page by selecting **Scaling** from the **Settings** menu for each resource. Follow the steps below to create your first autoscale setting. 
:::image type="content" source="./media/autoscale-get-started/custom-scale.png" lightbox="./media/autoscale-get-started/custom-scale.png" alt-text="A screenshot showing the Configure tab of the Autoscale Settings page."::: -1. The default rule scales your resource by one instance if the CPU percentage is greater than 70 percent. Keep the default values and select **Add**. +1. The default rule scales your resource by one instance if the `Percentage CPU` metric is greater than 70 percent. -1. You've now created your first scale-out rule. Best practice is to have at least one scale in rule. To add another rule, select **Add a rule**. + Keep the default values and select **Add**. ++1. You've created your first scale-out rule. Best practice is to have at least one scale-in rule. To add another rule, select **Add a rule**. 1. Set **Operator** to *Less than*. 1. Set **Metric threshold to trigger scale action** to *20*. Follow the steps below to create your first autoscale setting. :::image type="content" source="./media/autoscale-get-started/scale-rule.png" lightbox="./media/autoscale-get-started/scale-rule.png" alt-text="A screenshot showing a scale rule."::: - You now have a scale setting that scales out and scales in based on CPU usage, but you're still limited to a maximum of one instance. + You have configured a scale setting that scales out and scales in based on CPU usage, but you're still limited to a maximum of one instance. Change the instance limits to allow for more instances. 1. Under **Instance limits** set **Maximum** to *3* Set your resource to scale to a single instance on a Sunday. 1. Select **Scale to a specific instance count**. You can also scale based on metrics and thresholds that are specific to this scale condition. 1. Enter *1* in the **Instance count** field.-+1. Select **Repeat specific days**. 1. Select **Sunday** 1. Set the **Start time** and **End time** for when the scale condition should be applied. 
Outside of this time range, the default scale condition applies. 1. Select **Save** Set Autoscale to scale differently for specific dates, when you know that there 1. Select **Add a rule** to define your scale-out and scale-in rules. Set the rules to be same as the default condition. 1. Set the **Maximum** instance limit to *10* 1. Set the **Default** instance limit to *3*+1. Select **Specify start/end dates** 1. Enter the **Start date** and **End date** for when the scale condition should be applied. 1. Select **Save** -You have now defined a scale condition for a specific day. When CPU usage is greater than 70%, an additional instance is added, up to a maximum of 10 instances to handle anticipated load. When CPU usage is below 20%, an instance is removed up to a minimum of 1 instance. By default, autoscale will scale to 3 instances when this scale condition becomes active. +You have now defined a scale condition for a specific day. When CPU usage is greater than 70%, an additional instance is added, up to a maximum of 10 instances to handle anticipated load. When CPU usage is below 20%, an instance is removed up to a minimum of 1 instance. By default, autoscale scales to 3 instances when this scale condition becomes active. ## Additional settings Autoscale is an Azure Resource Manager resource. Like other resources, you can s You can make changes in JSON directly, if necessary. These changes will be reflected after you save them. +### Predictive autoscale ++Predictive autoscale uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. It predicts the overall CPU load by observing and learning from historical usage. This process ensures that scale-out occurs in time to meet the demand. For more information, see [Predictive autoscale](autoscale-predictive.md). 
+++### Scale-in policy ++When scaling a Virtual Machine Scale Set, the scale-in policy determines which virtual machines are selected for removal when a scale-in event occurs. The scale-in policy can be set to either **Default**, **NewestVM**, or **OldestVM**. For more information, see [Use custom scale-in policies with Azure Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy?WT.mc_id=Portal-Microsoft_Azure_Monitoring). ++++### Notify ++You can configure notifications to be sent when a scale event occurs. Notifications can be sent to an email address or to a webhook. For more information, see [Autoscale notifications](autoscale-webhook-email.md). ++ ### Cool-down period effects Autoscale uses a cool-down period. This period is the amount of time to wait after a scale operation before scaling again. The cool-down period allows the metrics to stabilize and avoids scaling more than once for the same condition. Cool-down applies to both scale-in and scale-out events. For example, if the cooldown is set to 10 minutes and Autoscale has just scaled-in, Autoscale won't attempt to scale again for another 10 minutes in either direction. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). ### Flapping -Flapping refers to a loop condition that causes a series of opposing scale events. Flapping happens when one scale event triggers an opposite scale event. For example, scaling in reduces the number of instances causing the CPU to rise in the remaining instances. This in turn triggers scale out event, which causes CPU usage to drop, repeating the process. For more information, see [Flapping in Autoscale](autoscale-flapping.md) and [Troubleshooting autoscale](autoscale-troubleshoot.md) +Flapping refers to a loop condition that causes a series of opposing scale events. Flapping happens when one scale event triggers an opposite scale event. 
For example, scaling in reduces the number of instances causing the CPU to rise in the remaining instances. This in turn triggers a scale-out event, which causes CPU usage to drop, repeating the process. For more information, see [Flapping in Autoscale](autoscale-flapping.md) and [Troubleshooting autoscale](autoscale-troubleshoot.md) ## Move autoscale to a different region This section describes how to move Azure autoscale to another region under the s ### Move -Use [REST API](/rest/api/monitor/autoscalesettings/createorupdate) to create an autoscale setting in the new environment. The autoscale setting created in the destination region will be a copy of the autoscale setting in the source region. +Use [REST API](/rest/api/monitor/autoscalesettings/createorupdate) to create an autoscale setting in the new environment. The autoscale setting created in the destination region is a copy of the autoscale setting in the source region. [Diagnostic settings](../essentials/diagnostic-settings.md) that were created in association with the autoscale setting in the source region can't be moved. You'll need to re-create diagnostic settings in the destination region, after the creation of autoscale settings is completed. To learn more about moving resources between regions and disaster recovery in Az - [Create an activity log alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) - [Create an activity log alert to monitor all failed autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)+- [Use autoscale actions to send email and webhook alert notifications in Azure Monitor ](autoscale-webhook-email.md) |
azure-monitor | Collect Custom Metrics Guestos Resource Manager Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md | +> [!NOTE] +> Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services such as [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction). +> +> We recommend using the Azure Monitor Agent to collect logs and metrics from Virtual Machines. For more information, see [Azure Monitor Agent overview](../agents/azure-monitor-agent-overview.md). + Performance data from the guest OS of Azure virtual machines (VMs) isn't collected automatically like other [platform metrics](./monitor-azure-resource.md#monitoring-data). Install the Azure Monitor [Diagnostics extension](../agents/diagnostics-extension-overview.md) to collect guest OS metrics into the metrics database so that it can be used with all features of Azure Monitor Metrics. These features include near real time alerting, charting, routing, and access from a REST API. This article describes the process for sending guest OS performance metrics for a Windows VM to the metrics database by using an Azure Resource Manager template (ARM template). > [!NOTE] |
azure-monitor | Collect Custom Metrics Guestos Resource Manager Vmss | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md | +> [!NOTE] +> Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and Virtual Machine Scale Sets and delivers it to Azure Monitor for use by features, insights, and other services such as [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction). +> +> We recommend using the Azure Monitor Agent to collect logs and metrics from Virtual Machine Scale Sets. For more information, see [Azure Monitor Agent overview](../agents/azure-monitor-agent-overview.md). + By using the Azure Monitor [Azure Diagnostics extension for Windows (WAD)](../agents/diagnostics-extension-overview.md), you can collect metrics and logs from the guest operating system (guest OS) that runs as part of a virtual machine, cloud service, or Azure Service Fabric cluster. The extension can send telemetry to many different locations listed in the previously linked article. This article describes the process to send guest OS performance metrics for a Wi If you're new to Resource Manager templates, learn about [template deployments](../../azure-resource-manager/management/overview.md) and their structure and syntax. + ## Prerequisites - Your subscription must be registered with [Microsoft.Insights](../../azure-resource-manager/management/resource-providers-and-types.md). |
azure-monitor | Metric Chart Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metric-chart-samples.md | This chart shows if the CPU usage for an App Service Plan was within the accepta ## Application availability by region -View your application's availability by region to identify which geographic locations are having problems. This chart shows the Application Insights availability metric. You can see that the monitored application has no problem with availability from the East US datacenter, but it's experiencing a partial availability problem from West US, and East Asia. +View your application's availability by region to identify which geographic locations are having problems. This chart shows the Application Insights availability metric. The chart shows that the monitored application has no problem with availability from the East US data center, but it's experiencing a partial availability problem from West US, and East Asia. :::image type="content" source="./media/metrics-charts/availability-by-location.png" alt-text="A screenshot showing a line chart of average availability by location." lightbox="./media/metrics-charts/availability-by-location.png"::: ### How to configure this chart -1. You must turn on [Application Insights availability](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) monitoring for your website. +1. Turn on [Application Insights availability](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) monitoring for your website. 1. Select your Application Insights resource. 1. Select the **Availability** metric. 1. Apply splitting on the **Run location** dimension. ## Volume of failed storage account transactions by API name -Your storage account resource is experiencing an excess volume of failed transactions. You can use the transactions metric to identify which API is responsible for the excess failure. 
Notice that the following chart is configured with the same dimension (API name) in splitting and filtered by failed response type: +Your storage account resource is experiencing an excess volume of failed transactions. Use the transactions metric to identify which API is responsible for the excess failure. Notice that the following chart is configured with the same dimension (API name) in splitting and filtered by failed response type: :::image type="content" source="./media/metrics-charts/split-and-filter-example.png" alt-text="A screenshot showing a chart of transactions by API name." lightbox="./media/metrics-charts/split-and-filter-example.png"::: ### How to configure this chart -1. In the Scope dropdown, select your Storage Account +1. In the scope dropdown, select your Storage Account. 1. In the metric dropdown, select the **Transactions** metric. 1. Select **Add filter** and select **Response type** from the **Property** dropdown. 1. Select **ClientOtherError** from the **Values** dropdown. |
azure-monitor | Prometheus Remote Write Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-virtual-machines.md | remote_write: - url: "<metrics ingestion endpoint for your Azure Monitor workspace>" # AzureAD configuration. # The Azure Cloud. Options are 'AzurePublic', 'AzureChina', or 'AzureGovernment'.- azuread: - cloud: 'AzurePublic' - managed_identity: - client_id: "<client-id of the managed identity>" - oauth: - client_id: "<client-id from the Entra app>" - client_secret: "<client secret from the Entra app>" - tenant_id: "<Azure subscription tenant Id>" + azuread: + cloud: 'AzurePublic' + managed_identity: + client_id: "<client-id of the managed identity>" + oauth: + client_id: "<client-id from the Entra app>" + client_secret: "<client secret from the Entra app>" + tenant_id: "<Azure subscription tenant Id>" ``` |
azure-monitor | Resource Manager Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md | -To create a diagnostic setting for an Azure resource, add a resource of type `<resource namespace>/providers/diagnosticSettings` to the template. This article provides examples for some resource types, but the same pattern can be applied to other resource types. The collection of allowed logs and metrics will vary for each resource type. +To create a diagnostic setting for an Azure resource, add a resource of type `<resource namespace>/providers/diagnosticSettings` to the template. This article provides examples for some resource types, but the same pattern can be applied to other resource types. The collection of allowed logs and metrics varies for each resource type. [!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)] resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "Send to all locations" }, "workspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": 
"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": "my-eventhub" resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "A new Diagnostic Settings configuration" }, "workspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": "myEventhub" resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "MyVault" }, "workspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": 
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": "my-eventhub" resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "MySqlDb" }, "workspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": 
"my-eventhub" resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "Send to all locations" }, "diagnosticWorkspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": "myEventhub" resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "Send to all locations" }, "diagnosticWorkspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": 
"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": "myEventhub" resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "my-vault" }, "workspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": "my-eventhub" resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = { "value": "MyWorkspace" }, "workspaceId": {- "value": 
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" }, "storageAccountId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" }, "eventHubAuthorizationRuleId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNameSpace/authorizationrules/RootManageSharedAccessKey" }, "eventHubName": { "value": "my-eventhub" resource queueSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' "value": "mystorageaccount" }, "workspaceId": {- "value": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" + "value": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyResourceGroup/providers/microsoft.operationalinsights/workspaces/MyWorkspace" } } } |
azure-netapp-files | Configure Ldap Over Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md | You can use LDAP over TLS to secure communication between an Azure NetApp Files If you do not have a root CA certificate, you need to generate one and export it for use with LDAP over TLS authentication. -1. Follow [Install the Certification Authority](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) to install and configure AD DS Certificate Authority. +1. Follow [Screenshot of the Certification Authority.](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) to install and configure AD DS Certificate Authority. -2. Follow [View certificates with the MMC snap-in](/dotnet/framework/wcf/feature-details/how-to-view-certificates-with-the-mmc-snap-in) to use the MMC snap-in and the Certificate Manager tool. +2. Follow [Screenshot of the view certificates with the MMC snap-in.](/dotnet/framework/wcf/feature-details/how-to-view-certificates-with-the-mmc-snap-in) to use the MMC snap-in and the Certificate Manager tool. Use the Certificate Manager snap-in to locate the root or issuing certificate for the local device. You should run the Certificate Management snap-in commands from one of the following settings: * A Windows-based client that has joined the domain and has the root certificate installed * Another machine in the domain containing the root certificate 3. Export the root CA certificate. 
- Root CA certificates can be exported from the Personal or Trusted Root Certification Authorities directory, as shown in the following examples: - ![screenshot that shows personal certificates](./media/configure-ldap-over-tls/personal-certificates.png) - ![screenshot that shows trusted root certification authorities](./media/configure-ldap-over-tls/trusted-root-certification-authorities.png) + Root CA certificates can be exported from the Personal or Trusted Root Certification Authorities directory. The following image shows the Personal Root Certification Authority directory: + ![Screenshot that shows personal certificates.](./media/configure-ldap-over-tls/personal-certificates.png). Ensure that the certificate is exported in the Base-64 encoded X.509 (.CER) format: - ![Certificate Export Wizard](./media/configure-ldap-over-tls/certificate-export-wizard.png) + ![Screenshot of the Certificate Export Wizard.](./media/configure-ldap-over-tls/certificate-export-wizard.png) ## Enable LDAP over TLS and upload root CA certificate |
azure-netapp-files | Manage Cool Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md | The storage with cool access feature provides options for the "coolness period" * See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#resource-limits) for maximum number of volumes supported for cool access per subscription per region. * Considerations for using cool access with [cross-region replication](cross-region-replication-requirements-considerations.md) and [cross-zone replication](cross-zone-replication-introduction.md): * The cool access setting on the destination is updated automatically to match the source volume whenever the setting is changed on the source volume or during authorizing or performing a reverse resync of the replication. Changes to the cool access setting on the destination volume don't affect the setting on the source volume.+ * In cross-region or cross-zone replication configuration, you can enable cool access exclusively for destination volumes to enhance data protection and create cost savings without affecting latency in source volumes. * Considerations for using cool access with [snapshot restore](snapshots-restore-new-volume.md): * When restoring a snapshot of a cool access enabled volume to a new volume, the new volume inherits the cool access configuration from the parent volume. Once the new volume is created, the cool access settings can be modified. * You can't restore from a snapshot of a non-cool-access volume to a cool access volume. Likewise, you can't restore from a snapshot of a cool access volume to a non-cool-access volume. |
chaos-studio | Chaos Studio Chaos Experiments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-chaos-experiments.md | A chaos experiment is an Azure resource deployed to a subscription, resource gro Chaos experiments can target resources in a different subscription than the experiment if the subscription is within the same Azure tenant. Chaos experiments can target resources in a different region than the experiment if the region is a supported region for Chaos Studio. +## Documenting chaos experiments ++There are several methods for documenting chaos engineering. One approach is to use work items in Azure DevOps Boards or in GitHub Projects. By creating dedicated work items for each experiment, you can track the details, progress, and outcomes of your experiments in a structured manner. This documentation can include information such as the purpose of the experiment, the expected outcomes, the steps followed, the resources involved, and any observations or learnings from the experiment. ++| Aspect | Details | Description | +|-|-|--| +| Hypothesis | Define the objective and expected outcomes of the experiment. | | +| Attack Layer | Identify which part of the system will be subjected to chaos experiments (e.g., network, database, application layer). | | +| Duration | Specify the time frame for the chaos experiment. | | +| Target | Determine the specific targets or components within the system. | | +| Environment | Define whether the experiment will be conducted in a production, staging, or development environment. | | +| Observations | Record any data or behavior observed during the experiment. | | +| Results | Summarize the findings and outcomes of the experiment. | | +| Action Items | List any action items or steps to be taken based on the results. | | ++The hypothesis is a crucial aspect of a chaos experiment as it defines the objective and expected outcomes of the experiment. 
It helps in testing the system's ability to handle unexpected disruptions effectively. By formulating a clear hypothesis, you can focus your experiment on specific areas of the system and gather meaningful data to evaluate its resilience. +By leveraging the features of Azure DevOps Boards or GitHub Projects, you can collaborate with your team, assign tasks, set due dates, and track the overall progress of your chaos engineering initiatives. This documentation serves as a reference for future analysis, sharing knowledge, and improving the resilience of your systems. + ## Next steps Now that you understand what a chaos experiment is you're ready to: |
connectors | Connectors Create Api Servicebus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md | The Service Bus connector has different versions, based on [logic app workflow t | Logic app | Environment | Connector version | |--|-|-|-| **Consumption** | Multitenant Azure Logic Apps | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**. For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [Managed connectors in Azure Logic Apps](managed.md) | -| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted), which appears in the connector gallery under **Runtime** > **Shared**, and built-in connector, which appears in the connector gallery under **Runtime** > **In App** and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version usually provides better performance, capabilities, pricing, and so on. <br><br>**Note**: Service Bus built-in connector triggers follow the [*polling trigger*](introduction.md#triggers) pattern, which means that the trigger continually checks for messages in the queue or topic subscription. <br><br>For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [Service Bus built-in connector operations](/azure/logic-apps/connectors/built-in/reference/servicebus) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) | +| **Consumption** | Multitenant Azure Logic Apps | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**. 
<br><br>**Note**: Service Bus managed connector triggers follow the [*long polling trigger* pattern](#service-bus-managed-triggers), which means that the trigger periodically checks for messages in the queue or topic subscription. For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [Managed connectors in Azure Logic Apps](managed.md) | +| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted), which appears in the connector gallery under **Runtime** > **Shared**, and built-in connector, which appears in the connector gallery under **Runtime** > **In App** and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). <br><br>The Service Bus managed connector triggers follow the [*long polling trigger* pattern](#service-bus-managed-triggers), which means that the trigger periodically checks for messages in the queue or topic subscription. <br><br>The Service Bus built-in connector triggers follow the [*push trigger* pattern](introduction.md#triggers), and the built-in connector usually provides better performance, capabilities, pricing, and so on. <br><br>For more information, review the following documentation: <br><br>- [Service Bus managed connector reference](/connectors/servicebus/) <br>- [Service Bus built-in connector operations](/azure/logic-apps/connectors/built-in/reference/servicebus) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) | ## Prerequisites In Standard workflows that use the Service Bus built-in operations, you can incr To increase the timeout for sending a message, [add the **ServiceProviders.ServiceBus.MessageSenderOperationTimeout** app setting](../logic-apps/edit-app-settings-host-settings.md). 
+<a name="service-bus-managed-triggers"></a> ### Service Bus managed connector triggers * For the Service Bus managed connector, all triggers are *long-polling*. This trigger type processes all the messages and then waits 30 seconds for more messages to appear in the queue or topic subscription. If no messages appear in 30 seconds, the trigger run is skipped. Otherwise, the trigger continues reading messages until the queue or topic subscription is empty. The next trigger poll is based on the recurrence interval specified in the trigger's properties. To increase the timeout for sending a message, [add the **ServiceProviders.Servi ### Service Bus built-in connector triggers -Currently, configuration settings for the Service Bus built-in trigger are shared between the [Azure Functions host extension](../azure-functions/functions-bindings-service-bus.md#hostjson-settings), which is defined in your logic app's [**host.json** file](../logic-apps/edit-app-settings-host-settings.md), and the trigger settings defined in your logic app's workflow, which you can set up either through the designer or code view. This section covers both settings locations. +For the Service Bus built-in connector, all triggers follow the [*push trigger* pattern](introduction.md#triggers). Currently, configuration settings for the Service Bus built-in trigger are shared between the [Azure Functions host extension](../azure-functions/functions-bindings-service-bus.md#hostjson-settings), which is defined in your logic app's [**host.json** file](../logic-apps/edit-app-settings-host-settings.md), and the trigger settings defined in your logic app's workflow, which you can set up either through the designer or code view. This section covers both settings locations. * In Standard workflows, some triggers, such as the **When messages are available in a queue** trigger, can return one or more messages. When these triggers fire, they return between one and the number of messages. 
For this type of trigger and where the **Maximum message count** parameter isn't supported, you can still control the number of messages received by using the **maxMessageBatchSize** property in the **host.json** file. To find this file, see [Edit host and app settings for Standard logic apps](../logic-apps/edit-app-settings-host-settings.md). The steps to add and use a Service Bus trigger differ based on whether you want #### Built-in connector trigger -The built-in Service Bus connector is a stateless connector, by default. To run this connector's operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](enable-stateful-affinity-built-in-connectors.md). +By default, the Service Bus built-in connector is a stateless connector. To run this connector's operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](enable-stateful-affinity-built-in-connectors.md). Also, Service Bus built-in triggers follow the [*push trigger* pattern](introduction.md#triggers). 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource with a blank workflow in the designer. The built-in Service Bus connector is a stateless connector, by default. To run ![Screenshot showing Standard workflow, Service Bus built-in trigger, and example trigger information.](./media/connectors-create-api-azure-service-bus/service-bus-trigger-built-in-standard.png) - > [!NOTE] - > - > This Service Bus trigger follows the *polling trigger* pattern, which means that the trigger continually checks for messages - > in the queue or topic subscription. For more general information about polling triggers, review [Triggers](introduction.md#triggers). - 1. Add any actions that your workflow needs. For example, you can add an action that sends email when a new message arrives. When your trigger checks your queue and finds a new message, your workflow runs your selected actions for the found message. 
The built-in Service Bus connector is a stateless connector, by default. To run #### Managed connector trigger +Service Bus managed triggers follow the [*long polling trigger* pattern](#service-bus-managed-triggers). + 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and a blank workflow in the designer. 1. In the designer, [follow these general steps to add the Azure Service Bus managed trigger that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger). As long as this error happens only occasionally, the error is expected. When the * [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) * [Built-in connectors for Azure Logic Apps](built-in.md)-* [What are connectors in Azure Logic Apps](introduction.md) +* [What are connectors in Azure Logic Apps](introduction.md) |
cost-management-billing | Subscription Disabled | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-disabled.md | If you're the Account Administrator or subscription Owner and you canceled a pay For other subscription types (for example, Enterprise Subscription), [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to have your subscription reactivated. +## Reactivation process time ++It can take up to 24 hours for your subscription to get reactivated after you pay your balance. + ## After reactivation After your subscription is reactivated, there might be a delay in creating or managing resources. If the delay exceeds 30 minutes, contact [Azure Billing Support](https://go.microsoft.com/fwlink/?linkid=2083458) for assistance. Most Azure resources automatically resume and don't require any action. However, we recommend that you check your Azure service resources and restart them, if necessary. |
databox | Data Box Disk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-overview.md | The Data Box Heavy device has the following features in this release. For information on region availability, go to [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). Data Box Disk can also be deployed in the Azure Government Cloud. For more information, see [What is Azure Government?](../azure-government/documentation-government-welcome.md). +Data Box Disk self-encrypting drives are generally available in the US, EU, and Japan. + ## Pricing For information on pricing, go to [Pricing page](https://azure.microsoft.com/pricing/details/databox/disk/). |
defender-for-iot | Dell Poweredge R350 E1800 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r350-e1800.md | The following image shows a view of the Dell PowerEdge R350 back panel: |2| 450-AKMP | Dual, Hot-Plug, Redundant Power Supply (1+1), 600W | ## Optional Components+ |Quantity|PN|Description| |-||-| |2| 450-AMJH | Dual, Hot-Plug, Power Supply, 700W MM HLAC (200-220Vac) Titanium, Redundant (1+1), by LiteOn, NAF| ## Optional Storage Controllers+ Multi-disk RAID arrays combine multiple physical drives into one logical drive for increased redundancy and performance. The optional modules below are tested in our lab for compatibility and sustained performance: |Quantity|PN|Description| Multi-disk RAID arrays combine multiple physical drives into one logical drive f |1| 405-ABBT | PERC H755 Controller Card (RAID10) | ## Optional port expansion+ Optional modules for additional monitoring ports can be installed: |Location |Type |Specifications | |
defender-for-iot | Dell Poweredge R360 E1800 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r360-e1800.md | The following image shows a view of the Dell PowerEdge R360 back panel: |1| 379-BCQY | iDRAC Group Manager, Disabled | |1| 470-AFBU | BOSS Blank | |1| 770-BCWN | ReadyRails Sliding Rails With Cable Management Arm |+|2| 450-AKMP | Dual, Hot-Plug, Redundant Power Supply (1+1), 600W MM **for US**<br> Dual, Hot-Plug, Redundant Power Supply (1+1), 700W MM HLAC (Only for 200-240Vac) titanium **for Europe** | ## Install Defender for IoT software on the DELL R360 |
event-grid | How To Filter Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/how-to-filter-events.md | New-AzEventGridTopic -ResourceGroupName gridResourceGroup -Location eastus2 -Nam $topicid = (Get-AzEventGridTopic -ResourceGroupName gridResourceGroup -Name $topicName).Id $expDate = '<mm/dd/yyyy hh:mm:ss>' | Get-Date-$AdvFilter1=@{operator="StringIn"; key="Data.color"; Values=@('blue', 'red', 'green')} +$AdvFilter1=@{operatorType="StringIn"; key="Data.color"; values=@('blue', 'red', 'green')} New-AzEventGridSubscription ` -ResourceId $topicid ` |
firewall | Service Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/service-tags.md | Azure Firewall supports configuration of service tags via PowerShell, Azure CLI, ### Configure via Azure PowerShell -In this example, we must first get context to our previously created Azure Firewall instance. +In this example, we are making a change to an Azure Firewall using classic rules. We must first get context to our previously created Azure Firewall instance. ```Get the context to an existing Azure Firewall $FirewallName = "AzureFirewall" |
frontdoor | Front Door Http Headers Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-http-headers-protocol.md | Azure Front Door includes headers for an incoming request unless they're removed | X-Azure-SocketIP | `X-Azure-SocketIP: 127.0.0.1` </br> Represents the socket IP address associated with the TCP connection that the current request originated from. A request's client IP address might not be equal to its socket IP address because the client IP can be arbitrarily overwritten by a user.| | X-Azure-Ref | `X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz` </br> A unique reference string that identifies a request served by Azure Front Door. This string is used to search access logs and is critical for troubleshooting.| | X-Azure-RequestChain | `X-Azure-RequestChain: hops=1` </br> A header that Front Door uses to detect request loops, and users shouldn't take a dependency on it. |-| X-Azure-FDID | `X-Azure-FDID: 55ce4ed1-4b06-4bf1-b40e-4638452104da` <br/> A reference string that identifies that the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. See the FAQ for [more detail](front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-) | +| X-Azure-FDID | `X-Azure-FDID: a0a0a0a0-bbbb-cccc-dddd-e1e1e1e1e1e1` <br/> A reference string that identifies that the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. 
See the FAQ for [more detail](front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-) | | X-Forwarded-For | `X-Forwarded-For: 127.0.0.1` </br> The X-Forwarded-For (XFF) HTTP header field often identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. If there's an existing XFF header, then Front Door appends the client socket IP to it or adds the XFF header with the client socket IP. | | X-Forwarded-Host | `X-Forwarded-Host: contoso.azurefd.net` </br> The X-Forwarded-Host HTTP header field is a common method used to identify the original host requested by the client in the Host HTTP request header. This is because the host name from Azure Front Door might differ for the backend server handling the request. Any previous value is overridden by Azure Front Door. | | X-Forwarded-Proto | `X-Forwarded-Proto: http` </br> The `X-Forwarded-Proto` HTTP header field is often used to identify the originating protocol of an HTTP request. Depending on its configuration, Front Door might communicate with the backend by using HTTPS. This is true even if the request to the reverse proxy is HTTP. Any previous value will be overridden by Front Door. |
governance | Australia Ism | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md | Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[\[Preview\]: System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff85bf3e0-d513-442e-89c3-1784ad63382b) |Your machines are missing system, security, and critical updates. Software updates often include critical patches to security holes. Such holes are frequently exploited in malware attacks so it's vital to keep your software updated. To install all outstanding patches and secure your machines, follow the remediation steps. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdatesV2_Audit.json) | |[Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F090c7b07-b4ed-4561-ad20-e9075f3ccaff) |Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. 
Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_AzureContainerRegistryVulnerabilityAssessment_Audit.json) | |[Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17f4b1cc-c55c-4d94-b1f9-2978f6ac2957) |Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_K8sRuningImagesVulnerabilityAssessmentBasedOnMDVM_Audit.json) | |[Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. 
Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode,](https://aka.ms/computevm-windowspatchassessmentmode,) for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.7.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Update%20Manager/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) | initiative definition. |[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | |[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | |[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |+|[System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff85bf3e0-d513-442e-89c3-1784ad63382b) |Your machines are missing system, security, and critical updates. Software updates often include critical patches to security holes. Such holes are frequently exploited in malware attacks so it's vital to keep your software updated. To install all outstanding patches and secure your machines, follow the remediation steps. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdatesV2_Audit.json) | |[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | |[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Azure Machine Configuration, and more. Previously updated : 08/14/2024 Last updated : 08/26/2024 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Azure Machine Configuration, and more. Previously updated : 08/14/2024 Last updated : 08/26/2024 |
governance | Canada Federal Pbmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md | Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Cis Azure 1 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Cis Azure 2 0 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 initiative definition. |[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) | |[System updates on virtual machine scale sets should be installed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | |[System updates should be installed on your machines](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |+|[System updates should be installed on your machines (powered by Update Center)](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff85bf3e0-d513-442e-89c3-1784ad63382b) |Your machines are missing system, security, and critical updates. Software updates often include critical patches to security holes. Such holes are frequently exploited in malware attacks so it's vital to keep your software updated. To install all outstanding patches and secure your machines, follow the remediation steps. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_MissingSystemUpdatesV2_Audit.json) | |[Vulnerabilities in container security configurations should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_ContainerBenchmark_Audit.json) | |[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |
governance | Gov Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Nist Sp 800 171 R2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md | Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Nist Sp 800 53 R4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Gov Soc 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-soc-2.md | Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 (Azure Government) description: Details of the System and Organization Controls (SOC) 2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Hipaa Hitrust 9 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md | Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Mcfs Baseline Confidential | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-confidential.md | Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Confidential Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Confidential Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Mcfs Baseline Global | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-global.md | Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Global Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Global Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Nist Sp 800 171 R2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md | Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Nist Sp 800 53 R4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Nl Bio Cloud Theme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nl-bio-cloud-theme.md | Title: Regulatory Compliance details for NL BIO Cloud Theme description: Details of the NL BIO Cloud Theme Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Pci Dss 3 2 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md | Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Pci Dss 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md | Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 initiative definition. |[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) | |[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | |[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |+|[Set file integrity rules in your 
organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9e1a2a94-cf7e-47de-b28e-d445ecc63902) |CMA_M1000 - Set file integrity rules in your organization |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_M1000.json) | ### Network intrusions and unexpected file changes are detected and responded to initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Employ automatic shutdown/restart when violations are detected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8a7ec3-11cc-a2d3-8cd0-eedf074424a4) |CMA_C1715 - Employ automatic shutdown/restart when violations are detected |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1715.json) |+|[Set file integrity rules in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9e1a2a94-cf7e-47de-b28e-d445ecc63902) |CMA_M1000 - Set file integrity rules in your organization |Manual, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_M1000.json) | |[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) | |
governance | Rbi Itf Banks 2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md | Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Rbi Itf Nbfc 2017 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md | Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Rmit Malaysia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md | Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Soc 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/soc-2.md | Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 description: Details of the System and Organization Controls (SOC) 2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Spain Ens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/spain-ens.md | Title: Regulatory Compliance details for Spain ENS description: Details of the Spain ENS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 initiative definition. |[Configure Azure Defender for servers to be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e86a5b6-b9bd-49d1-8e21-4bb8a0862222) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |DeployIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_Servers_DINE.json) | |[Configure Azure Defender for SQL servers on machines to be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50ea7265-7d8c-429e-9a7d-ca1f410191c3) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. 
|DeployIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_SQLServers_DINE.json) | |[Configure Azure Defender to be enabled on SQL managed instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5a62eb0-c65a-4220-8a4d-f70dd4ca95dd) |Enable Azure Defender on your Azure SQL Managed Instances to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. |DeployIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/TdOnManagedInstances_DINE.json) |-|[Configure Azure Kubernetes Service clusters to enable Defender profile](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F64def556-fbad-4622-930e-72d1d5589bf5) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.Defender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers: [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](/azure/defender-for-cloud/defender-for-containers-introduction). 
|DeployIfNotExists, Disabled |[4.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_AKS_SecurityProfile_DINE.json) | +|[Configure Azure Kubernetes Service clusters to enable Defender profile](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F64def556-fbad-4622-930e-72d1d5589bf5) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.Defender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers: [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](/azure/defender-for-cloud/defender-for-containers-introduction). |DeployIfNotExists, Disabled |[4.3.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_AKS_SecurityProfile_DINE.json) | |[Configure basic Microsoft Defender for Storage to be enabled (Activity Monitoring only)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17bc14a7-92e1-4551-8b8c-80f36953e166) |Microsoft Defender for Storage is an Azure-native layer of security intelligence that detects potential threats to your storage accounts. This policy will enable the basic Defender for Storage capabilities (Activity Monitoring). To enable full protection, which also includes On-upload Malware Scanning and Sensitive Data Threat Detection use the full enablement policy: aka.ms/DefenderForStoragePolicy. To learn more about Defender for Storage capabilities and benefits, visit aka.ms/DefenderForStorage. 
|DeployIfNotExists, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Basic_DINE.json) | |[Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13ce0167-8ca6-4048-8e6b-f996402e3c1b) |Azure Defender includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Security Center. When you enable this policy, Azure Defender automatically deploys the Qualys vulnerability assessment provider to all supported machines that don't already have it installed. |DeployIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VulnerabilityAssessment_ProvisionQualysAgent_DINE.json) | |[Configure Microsoft Defender CSPM to be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F689f7782-ef2c-4270-a6d0-7664869076bd) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |DeployIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_DINE.json) | |
governance | Swift Csp Cscf 2021 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2021.md | Title: Regulatory Compliance details for SWIFT CSP-CSCF v2021 description: Details of the SWIFT CSP-CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Swift Csp Cscf 2022 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2022.md | Title: Regulatory Compliance details for SWIFT CSP-CSCF v2022 description: Details of the SWIFT CSP-CSCF v2022 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
governance | Ukofficial Uknhs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md | Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/16/2024 Last updated : 08/26/2024 |
iot-hub | Authenticate Authorize Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-azure-ad.md | -Authenticating access by using Microsoft Entra ID and controlling permissions by using Azure RBAC provides improved security and ease of use over security tokens. To minimize potential security issues inherent in security tokens, we recommend that you [enforce Microsoft Entra authentication whenever possible](#enforce-azure-ad-authentication). +Authenticating access by using Microsoft Entra ID and controlling permissions by using Azure RBAC provides improved security and ease of use over security tokens. To minimize potential security issues inherent in security tokens, we recommend that you [enforce Microsoft Entra authentication](#enforce-azure-ad-authentication) whenever possible. > [!NOTE] > Authentication with Microsoft Entra ID isn't supported for the IoT Hub *device APIs* (like device-to-cloud messages and update reported properties). Use [symmetric keys](authenticate-authorize-sas.md) or [X.509](authenticate-authorize-x509.md) to authenticate devices to IoT Hub. ## Authentication and authorization -*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. *Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*. +*Authentication* is the process of proving that you are who you say you are. Authentication verifies the identity of a user or device to IoT Hub. It's sometimes shortened to *AuthN*. -When a Microsoft Entra security principal requests access to an IoT Hub service API, the principal's identity is first *authenticated*. 
For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md). +*Authorization* is the process of confirming permissions for an authenticated user or device on IoT Hub. It specifies what resources and commands you're allowed to access, and what you can do with those resources and commands. Authorization is sometimes shortened to *AuthZ*. ++When a Microsoft Entra security principal requests access to an IoT Hub service API, the principal's identity is first *authenticated*. For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](/entra/identity/managed-identities-azure-resources/how-managed-identities-work-vm). After the Microsoft Entra principal is authenticated, the next step is *authorization*. In this step, IoT Hub uses the Microsoft Entra role assignment service to determine what permissions the principal has. If the principal's permissions match the requested resource or API, IoT Hub authorizes the request. So this step requires one or more Azure roles to be assigned to the security principal. IoT Hub provides some built-in roles that have common groups of permissions. ## Manage access to IoT Hub by using Azure RBAC role assignment -With Microsoft Entra ID and RBAC, IoT Hub requires the principal requesting the API to have the appropriate level of permission for authorization. To give the principal the permission, give it a role assignment. 
+With Microsoft Entra ID and RBAC, IoT Hub requires that the principal requesting the API have the appropriate level of permission for authorization. To give the principal the permission, give it a role assignment. - If the principal is a user, group, or application service principal, follow the guidance in [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.yml).-- If the principal is a managed identity, follow the guidance in [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).+- If the principal is a managed identity, follow the guidance in [Assign a managed identity access to a resource](/entra/identity/managed-identities-azure-resources/how-to-assign-access-azure-resource). To ensure least privilege, always assign the appropriate role at the lowest possible [resource scope](#resource-scope), which is probably the IoT Hub scope. IoT Hub provides the following Azure built-in roles for authorizing access to IoT Hub service APIs by using Microsoft Entra ID and RBAC: -| Role | Description | -| - | -- | +| Role | Description | +| - | -- | | [IoT Hub Data Contributor](../role-based-access-control/built-in-roles.md#iot-hub-data-contributor) | Allows full access to IoT Hub data plane operations. | | [IoT Hub Data Reader](../role-based-access-control/built-in-roles.md#iot-hub-data-reader) | Allows full read access to IoT Hub data plane properties. | | [IoT Hub Registry Contributor](../role-based-access-control/built-in-roles.md#iot-hub-registry-contributor) | Allows full access to the IoT Hub device registry. | Before you assign an Azure RBAC role to a security principal, determine the scop This list describes the levels at which you can scope access to IoT Hub, starting with the narrowest scope: -- **The IoT hub.** At this scope, a role assignment applies to the IoT hub. 
There's no scope smaller than an individual IoT hub. Role assignment at smaller scopes, like individual device identity or twin section, isn't supported.+- **The IoT hub.** At this scope, a role assignment applies to the IoT hub. There's no scope smaller than an individual IoT hub. Role assignment at smaller scopes, like individual device identity, isn't supported. - **The resource group.** At this scope, a role assignment applies to all IoT hubs in the resource group. - **The subscription.** At this scope, a role assignment applies to all IoT hubs in all resource groups in the subscription. - **A management group.** At this scope, a role assignment applies to all IoT hubs in all resource groups in all subscriptions in the management group. The following table describes the permissions available for IoT Hub service API | RBAC action | Description | |-|-| | `Microsoft.Devices/IotHubs/devices/read` | Read any device or module identity. |-| `Microsoft.Devices/IotHubs/devices/write` | Create or update any device or module identity. | +| `Microsoft.Devices/IotHubs/devices/write` | Create or update any device or module identity. | | `Microsoft.Devices/IotHubs/devices/delete` | Delete any device or module identity. | | `Microsoft.Devices/IotHubs/twins/read` | Read any device or module twin. | | `Microsoft.Devices/IotHubs/twins/write` | Write any device or module twin. | | `Microsoft.Devices/IotHubs/jobs/read` | Return a list of jobs. | | `Microsoft.Devices/IotHubs/jobs/write` | Create or update any job. | | `Microsoft.Devices/IotHubs/jobs/delete` | Delete any job. |-| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/send/action` | Send a cloud-to-device message to any device. | +| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/send/action` | Send a cloud-to-device message to any device. | | `Microsoft.Devices/IotHubs/cloudToDeviceMessages/feedback/action` | Receive, complete, or abandon a cloud-to-device message feedback notification. 
|-| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/queue/purge/action` | Delete all the pending commands for a device. | +| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/queue/purge/action` | Delete all the pending commands for a device. | | `Microsoft.Devices/IotHubs/directMethods/invoke/action` | Invoke a direct method on any device or module. | | `Microsoft.Devices/IotHubs/fileUpload/notifications/action` | Receive, complete, or abandon file upload notifications. | | `Microsoft.Devices/IotHubs/statistics/read` | Read device and service statistics. | The following table describes the permissions available for IoT Hub service API | `Microsoft.Devices/IotHubs/configurations/testQueries/action` | Validate the target condition and custom metric queries for a configuration. | > [!TIP]+> > - The [Bulk Registry Update](/rest/api/iothub/service/bulkregistry/updateregistry) operation requires both `Microsoft.Devices/IotHubs/devices/write` and `Microsoft.Devices/IotHubs/devices/delete`. > - The [Twin Query](/rest/api/iothub/service/query/gettwins) operation requires `Microsoft.Devices/IotHubs/twins/read`. > - [Get Digital Twin](/rest/api/iothub/service/digitaltwin/getdigitaltwin) requires `Microsoft.Devices/IotHubs/twins/read`. [Update Digital Twin](/rest/api/iothub/service/digitaltwin/updatedigitaltwin) requires `Microsoft.Devices/IotHubs/twins/write`. > - Both [Invoke Component Command](/rest/api/iothub/service/digitaltwin/invokecomponentcommand) and [Invoke Root Level Command](/rest/api/iothub/service/digitaltwin/invokerootlevelcommand) require `Microsoft.Devices/IotHubs/directMethods/invoke/action`. > [!NOTE]-> To get data from IoT Hub by using Microsoft Entra ID, [set up routing to a custom Event Hubs endpoint](iot-hub-devguide-messages-d2c.md). To access the [the built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before. 
+> To get data from IoT Hub by using Microsoft Entra ID, [set up routing to a custom Event Hubs endpoint](iot-hub-devguide-messages-d2c.md). To access the [the built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before. <a name='enforce-azure-ad-authentication'></a> ## Enforce Microsoft Entra authentication -By default, IoT Hub supports service API access through both Microsoft Entra ID and [shared access policies and security tokens](authenticate-authorize-sas.md). To minimize potential security vulnerabilities inherent in security tokens, you can disable access with shared access policies. +By default, IoT Hub supports service API access through both Microsoft Entra ID and [shared access policies and security tokens](authenticate-authorize-sas.md). To minimize potential security vulnerabilities inherent in security tokens, you can disable access with shared access policies. - > [!WARNING] - > By denying connections using shared access policies, all users and services that connect using this method lose access immediately. Notably, since Device Provisioning Service (DPS) only supports linking IoT hubs using shared access policies, all device provisioning flows will fail with "unauthorized" error. Proceed carefully and plan to replace access with Microsoft Entra role based access. **Do not proceed if you use DPS**. +> [!WARNING] +> +> By denying connections using shared access policies, all users and services that connect using this method lose access immediately. Notably, since Device Provisioning Service (DPS) only supports linking IoT hubs using shared access policies, all device provisioning flows will fail with "unauthorized" error. Proceed carefully and plan to replace access with Microsoft Entra role based access. +> +> **Do not proceed if you use Device Provisioning Service**. 1. 
Ensure that your service clients and users have [sufficient access](#manage-access-to-iot-hub-by-using-azure-rbac-role-assignment) to your IoT hub. Follow the [principle of least privilege](../security/fundamentals/identity-management-best-practices.md). 1. In the [Azure portal](https://portal.azure.com), go to your IoT hub. 1. On the left pane, select **Shared access policies**. 1. Under **Connect using shared access policies**, select **Deny**, and review the warning.+ :::image type="content" source="media/iot-hub-dev-guide-azure-ad-rbac/disable-local-auth.png" alt-text="Screenshot that shows how to turn off IoT Hub shared access policies." border="true"::: +1. Select **Save**. + Your IoT Hub service APIs can now be accessed only through Microsoft Entra ID and RBAC. <a name='azure-ad-access-from-the-azure-portal'></a> For more information, see the [Azure IoT extension for Azure CLI release page](h ## Next steps -- For more information on the advantages of using Microsoft Entra ID in your application, see [Integrating with Microsoft Entra ID](../active-directory/develop/how-to-integrate.md).-- For more information on requesting access tokens from Microsoft Entra ID for users and service principals, see [Authentication scenarios for Microsoft Entra ID](../active-directory/develop/authentication-vs-authorization.md).+- For more information on the advantages of using Microsoft Entra ID in your application, see [Integrating with the Microsoft identity platform](/entra/identity-platform/how-to-integrate). +- To learn how access tokens, refresh tokens, and ID tokens are used in authorization and authentication, see [Security tokens](/entra/identity-platform/security-tokens). -Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md). |
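The authorization step described in the authenticate-authorize-azure-ad row above can be sketched as a simple lookup: after a principal is authenticated, IoT Hub checks whether any Azure role assigned to the principal grants the RBAC action that the requested service API requires. The sketch below is an illustrative model only, not IoT Hub's implementation; the role-to-action sets are a small hypothetical subset based on the built-in role descriptions quoted in the row.

```python
# Illustrative model of the IoT Hub authorization step: a request is allowed
# when at least one of the principal's role assignments grants the RBAC
# action the requested service API requires.
# BUILT_IN_ROLES is a small hypothetical subset, not the full built-in
# role definitions.
BUILT_IN_ROLES = {
    "IoT Hub Data Reader": {
        "Microsoft.Devices/IotHubs/devices/read",
        "Microsoft.Devices/IotHubs/twins/read",
    },
    "IoT Hub Registry Contributor": {
        "Microsoft.Devices/IotHubs/devices/read",
        "Microsoft.Devices/IotHubs/devices/write",
        "Microsoft.Devices/IotHubs/devices/delete",
    },
}

def is_authorized(assigned_roles, required_action):
    """Return True if any assigned role grants the required RBAC action."""
    return any(
        required_action in BUILT_IN_ROLES.get(role, set())
        for role in assigned_roles
    )

print(is_authorized(["IoT Hub Data Reader"], "Microsoft.Devices/IotHubs/twins/read"))      # True
print(is_authorized(["IoT Hub Data Reader"], "Microsoft.Devices/IotHubs/devices/delete"))  # False
```

In this model, an operation such as Bulk Registry Update, which the tip in the row says needs both `devices/write` and `devices/delete`, would correspond to checking each required action separately.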
iot-hub | Iot Hub Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md | -Managed identities provide Azure services with an automatically managed identity in Microsoft Entra ID in a secure manner. This eliminates the need for developers having to manage credentials by providing an identity. There are two types of managed identities: system-assigned and user-assigned. IoT Hub supports both. +Managed identities provide Azure services with an automatically managed identity in Microsoft Entra ID in a secure manner. This eliminates the need for developers to manage credentials themselves. There are two types of managed identities: system-assigned and user-assigned. IoT Hub supports both. -In IoT Hub, managed identities can be used for egress connectivity from IoT Hub to other Azure services for features such as [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). In this article, you learn how to use system-assigned and user-assigned managed identities in your IoT hub for different functionalities. +In IoT Hub, managed identities can be used to connect IoT Hub to other Azure services for features such as [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). In this article, you learn how to use system-assigned and user-assigned managed identities in your IoT hub for different functionalities.
## Prerequisites -- Understand the managed identity differences between *system-assigned* and *user-assigned* in [What are managed identities for Azure resources?](./../active-directory/managed-identities-azure-resources/overview.md)+- Understand the differences between *system-assigned* and *user-assigned* managed identities in [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview) - An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md). ## System-assigned managed identity -### Enable or disable system-assigned managed identity in Azure portal +### [Azure portal](#tab/portal) ++You can enable or disable system-assigned managed identity in the Azure portal. 1. Sign in to the Azure portal and navigate to your IoT hub. 2. Select **Identity** from the **Security settings** section of the navigation menu. In IoT Hub, managed identities can be used for egress connectivity from IoT Hub :::image type="content" source="./media/iot-hub-managed-identity/system-assigned.png" alt-text="Screenshot showing where to turn on system-assigned managed identity for an IoT hub."::: -### Enable system-assigned managed identity at hub creation time using ARM template +### [Azure Resource Manager](#tab/arm) ++You can enable system-assigned managed identity at hub creation time by using an ARM template. To enable the system-assigned managed identity in your IoT hub at resource provisioning time, use the Azure Resource Manager (ARM) template below.
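The template body itself is unchanged by this commit, so it doesn't appear in the diff. As a hedged sketch only (the API version and most property values are illustrative placeholders, not taken from the article), the key piece is an `identity` property of type `SystemAssigned` on the `Microsoft.Devices/IotHubs` resource:

```json
{
  "type": "Microsoft.Devices/IotHubs",
  "apiVersion": "2023-06-30",
  "name": "[parameters('iotHubName')]",
  "location": "[parameters('location')]",
  "identity": {
    "type": "SystemAssigned"
  },
  "sku": {
    "name": "[parameters('skuName')]",
    "tier": "[parameters('skuTier')]",
    "capacity": 1
  }
}
```

The same `identity` block, with `"type": "UserAssigned"` and a `userAssignedIdentities` map, covers the user-assigned case described later in the article.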
After substituting the values for your resource `name`, `location`, `SKU.name` a az deployment group create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json> --parameters iotHubName=<valid-iothub-name> skuName=<sku-name> skuTier=<sku-tier> location=<any-of-supported-regions> ``` -After the resource is created, you can retrieve the system-assigned assigned to your hub using Azure CLI: +You can retrieve the system-assigned managed identity assigned to your hub using the Azure CLI: ```azurecli-interactive az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resource-name> --resource-group <resource-group-name> ``` ++ ## User-assigned managed identity +### [Azure portal](#tab/portal) + In this section, you learn how to add and remove a user-assigned managed identity from an IoT hub using Azure portal. -1. First you need to create a user-assigned managed identity as a standalone resource. To do so, you can follow the instructions in [Create a user-assigned managed identity](./../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). +1. First you need to create a user-assigned managed identity as a standalone resource. To do so, you can follow the instructions in [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities). 2. Go to your IoT hub, navigate to the **Identity** in the IoT Hub portal.-3. Under **User-Assigned** tab, click **Associate a user-assigned managed identity**. Choose the user-assigned managed identity you want to add to your hub and then click **Select**. -4. You can remove a user-assigned identity from an IoT hub. Choose the user-assigned identity you want to remove, and click **Remove** button. Note you are only removing it from IoT hub, and this removal does not delete the user-assigned identity as a resource. 
To delete the user-assigned identity as a resource, follow the instructions in [Delete a user-assigned managed identity](./../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#delete-a-user-assigned-managed-identity). +3. Under the **User-Assigned** tab, click **Associate a user-assigned managed identity**. Choose the user-assigned managed identity you want to add to your hub and then click **Select**. +4. You can remove a user-assigned identity from an IoT hub. Choose the user-assigned identity you want to remove, and click the **Remove** button. Note that you're only removing it from the IoT hub; this removal doesn't delete the user-assigned identity as a resource. To delete the user-assigned identity as a resource, follow the instructions in [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities). - :::image type="content" source="./media/iot-hub-managed-identity/user-assigned.png" alt-text="Screenshot showing how to add user-assigned managed identity for an I O T hub."::: + :::image type="content" source="./media/iot-hub-managed-identity/user-assigned.png" alt-text="Screenshot showing how to add user-assigned managed identity for an IoT hub." lightbox="./media/iot-hub-managed-identity/user-assigned.png"::: -### Enable user-assigned managed identity at hub creation time using ARM template +### [Azure Resource Manager](#tab/arm) The following example template can be used to create a hub with user-assigned managed identity. This template creates one user-assigned identity with the name *[iothub-name-provided]-identity* and assigns it to the IoT hub that's created.
You can change the template to add multiple user-assigned identities as needed.- + ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", After the resource is created, you can retrieve the user-assigned managed identi az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resource-name> --resource-group <resource-group-name> ``` ++ ## Egress connectivity from IoT Hub to other Azure resources -Managed identities can be used for egress connectivity from IoT Hub to other Azure services for [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). You can choose which managed identity to use for each IoT Hub egress connectivity to customer-owned endpoints including storage accounts, event hubs, and service bus endpoints. +Managed identities can be used for egress connectivity from IoT Hub to other Azure services. You can choose which managed identity to use for each IoT Hub egress connectivity to customer-owned endpoints including storage accounts, event hubs, and service bus endpoints. > [!NOTE] > Only system-assigned managed identity gives IoT Hub access to private resources. If you want to use user-assigned managed identity, then the public access on those private resources needs to be enabled in order to allow connectivity. In this section, we use the [message routing](iot-hub-devguide-messages-d2c.md) > [!NOTE] > For a storage account, select **Storage Blob Data Contributor** ([*not* Contributor or Storage Account Contributor](../storage/blobs/assign-azure-role-data-access.md)) as the role. For a service bus, select **Azure Service Bus Data Sender**. - :::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected."::: - 1. 
On the **Members** tab, select **Managed identity**, and then select **Select members**. 1. For user-assigned managed identities, select your subscription, select **User-assigned managed identity**, and then select your user-assigned managed identity. In this section, we use the [message routing](iot-hub-devguide-messages-d2c.md) 1. On the **Review + assign** tab, select **Review + assign** to assign the role. - For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml) + For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). 1. If you need to restrict the connectivity to your custom endpoint through a VNet, you need to turn on the trusted Microsoft first-party exception to give your IoT hub access to the specific endpoint. For example, if you're adding an event hub custom endpoint, navigate to the **Firewalls and virtual networks** tab in your event hub and enable the **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access event hubs**. Click the **Save** button. This also applies to storage accounts and service bus. Learn more about [IoT Hub support for virtual networks](./virtual-network-support.md). IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices 1. On the **Review + assign** tab, select **Review + assign** to assign the role. - For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml) + For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
If you need to restrict the connectivity to your storage account through a VNet, you need to turn on the trusted Microsoft first-party exception to give your IoT hub access to the storage account. On your storage account resource page, navigate to the **Firewalls and virtual networks** tab and enable the **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access this storage account**. Click the **Save** button. Learn more about [IoT Hub support for virtual networks](./virtual-network-support.md). > [!NOTE] > You need to complete the above steps to assign the managed identity the right access before saving the storage account in IoT Hub for file upload using the managed identity. Please wait a few minutes for the role assignment to propagate.-5. On your IoT hub's resource page, navigate to **File upload** tab. -6. On the page that shows up, select the container that you intend to use in your blob storage, configure the **File notification settings, SAS TTL, Default TTL, and Maximum delivery count** as desired. Choose the preferred authentication type, and click **Save**. If you get an error at this step, temporarily set your storage account to allow access from **All networks**, then try again. You can configure firewall on the storage account once the File upload configuration is complete. ++1. On your IoT hub's resource page, navigate to the **File upload** tab. ++1. On the page that shows up, select the container that you intend to use in your blob storage, configure the **File notification settings, SAS TTL, Default TTL, and Maximum delivery count** as desired. Choose the preferred authentication type, and click **Save**. If you get an error at this step, temporarily set your storage account to allow access from **All networks**, then try again. You can configure the firewall on the storage account once the file upload configuration is complete.
:::image type="content" source="./media/iot-hub-managed-identity/file-upload.png" alt-text="Screen shot that shows file upload with msi."::: IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices ## Configure bulk device import/export with managed identities -IoT Hub supports the functionality to [import/export devices](iot-hub-bulk-identity-mgmt.md)' information in bulk from/to a customer-provided storage blob. This functionality requires connectivity from IoT Hub to the storage account. +IoT Hub supports the functionality to [import/export device information in bulk](iot-hub-bulk-identity-mgmt.md) from or to a customer-provided storage blob. This functionality requires connectivity from IoT Hub to the storage account. 1. In the Azure portal, navigate to your storage account. result = iothub_job_manager.create_import_export_job(JobProperties( ``` > [!NOTE]+ > - If **storageAuthenticationType** is set to **identityBased** and the **userAssignedIdentity** property is not **null**, the jobs will use the specified user-assigned managed identity. > - If the IoT hub is not configured with the user-assigned managed identity specified in **userAssignedIdentity**, the job will fail. > - If **storageAuthenticationType** is set to **identityBased** and the **userAssignedIdentity** property is null, the jobs will use the system-assigned identity. result = iothub_job_manager.create_import_export_job(JobProperties( > - If **storageAuthenticationType** is set to **identityBased** and neither **user-assigned** nor **system-assigned** managed identities are configured on the hub, the job will fail.
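The rules in the note above can be sketched as a small standard-library helper. This is an illustration only: `import_export_storage_auth` is a hypothetical function, not part of the Azure SDK, and the exact body shape is an assumption; only the `storageAuthenticationType` and `userAssignedIdentity` property names come from the note.

```python
import json

def import_export_storage_auth(user_assigned_identity=None):
    """Hypothetical helper (not an Azure SDK API): build the
    storage-authentication portion of an import/export job body,
    following the rules listed in the note above."""
    props = {"storageAuthenticationType": "identityBased"}
    if user_assigned_identity:
        # The job uses the specified user-assigned managed identity.
        props["identity"] = {"userAssignedIdentity": user_assigned_identity}
    # With no userAssignedIdentity, the job falls back to the
    # system-assigned identity, and fails if neither is configured.
    return props

print(json.dumps(import_export_storage_auth(), indent=2))
```

Passing the resource ID of a user-assigned identity adds the `identity` object; omitting it leaves the system-assigned fallback in effect.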
## SDK samples+ - [.NET SDK sample](https://aka.ms/iothubmsicsharpsample) - [Java SDK sample](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-device-client/src/main/java/com/microsoft/azure/sdk/iot) - [Python SDK sample](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) result = iothub_job_manager.create_import_export_job(JobProperties( Use the links below to learn more about IoT Hub features: -* [Message routing](./iot-hub-devguide-messages-d2c.md) -* [File upload](./iot-hub-devguide-file-upload.md) -* [Bulk device import/export](./iot-hub-bulk-identity-mgmt.md) +- [Message routing](./iot-hub-devguide-messages-d2c.md) +- [File upload](./iot-hub-devguide-file-upload.md) +- [Bulk device import/export](./iot-hub-bulk-identity-mgmt.md) |
lab-services | Retirement Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/retirement-guide.md | Azure Lab Services will be retired on June 28, 2027. The Azure Lab Services reti This section provides links to Microsoft and partner solutions that cover the breadth of Azure Lab Services capabilities. Also included are links that can help you with your transition from Azure Lab Services. ### Microsoft solutions-There are various Microsoft solutions that you might consider as a direct replacement for Azure Lab Services. Each of these Microsoft solutions offers browser-based web access. While these solutions arenΓÇÖt necessarily education-specific, they support a wide range of education and training scenarios. +There are various Microsoft solutions that you might consider as a direct replacement for Azure Lab Services. Each of these Microsoft solutions offers browser-based web access. While these solutions aren't necessarily education-specific, they support a wide range of education and training scenarios. ### Azure Virtual Desktop-[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) is a comprehensive desktop and app virtualization service running in the cloud, offering secure, and scalable virtual desktop experiences with usage-based pricing. ItΓÇÖs ideal for providing full desktop and app delivery scenarios for Windows 10/11 with maximum control to any device from a flexible cloud virtual desktop infrastructure (VDI) platform on your Azure infrastructure and by using Microsoft Entra ID for user identities. Azure Virtual Desktop supports CPU/GPU-based Microsoft Entra ID joined virtual machines, content filtering, image management from Azure Marketplace or Azure compute gallery, centralized end-to-end management with Intune, and multi-session capabilities. 
+[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) is a comprehensive desktop and app virtualization service running in the cloud, offering secure and scalable virtual desktop experiences with usage-based pricing. It's ideal for providing full desktop and app delivery scenarios for Windows 10/11 with maximum control to any device from a flexible cloud virtual desktop infrastructure (VDI) platform on your Azure infrastructure and by using Microsoft Entra ID for user identities. Azure Virtual Desktop supports CPU/GPU-based Microsoft Entra ID joined virtual machines, content filtering, image management from Azure Marketplace or Azure compute gallery, centralized end-to-end management with Intune, and multi-session capabilities. #### How can I get started with Azure Virtual Desktop? - [What is Azure Virtual Desktop?](/azure/virtual-desktop/overview) - [Azure landing zones for Azure Virtual Desktop instances](/azure/cloud-adoption-framework/scenarios/azure-virtual-desktop/ready) ### Azure DevTest Labs-[Azure DevTest Labs](https://azure.microsoft.com/products/devtest-lab/) simplifies creation, usage, and management of infrastructure-as-a-service (IaaS) virtual machines within a lab context with usage-based pricing. ItΓÇÖs ideal for computer programming related courses and those users familiar with the Azure portal. +[Azure DevTest Labs](https://azure.microsoft.com/products/devtest-lab/) simplifies creation, usage, and management of infrastructure-as-a-service (IaaS) virtual machines within a lab context with usage-based pricing. It's ideal for computer programming-related courses and users familiar with the Azure portal.
Azure DevTest Labs supports Linux and Windows CPU/GPU-based virtual machines, student admin access, network isolated labs, nested virtualization, and image management from Azure Marketplace or Azure compute gallery. ++For more guidance on transitioning from Azure Lab Services to Azure DevTest Labs, see the [Azure Lab Services to Azure DevTest Labs Transition Guide](/azure/lab-services/transition-devtest-labs-guidance). #### How can I get started with Azure DevTest Labs? - [What is Azure DevTest Labs?](/azure/devtest-labs/devtest-lab-overview) There are various Microsoft solutions that you might consider as a direct replac - [What is Windows 365?](/windows-365/enterprise/overview) ### Microsoft Dev Box -[Microsoft Dev Box](https://azure.microsoft.com/products/dev-box/) offers cloud-based workstations preconfigured with tools and environments for developer workflow-specific tasks with usage-based pricing. ItΓÇÖs ideal for facilitating hands-on learning where training leaders can use Dev Box supported images to create identical virtual machines for trainees. Dev Box virtual machines are Microsoft Entra ID joined and support centralized end-to-end management with Microsoft Intune. +[Microsoft Dev Box](https://azure.microsoft.com/products/dev-box/) offers cloud-based workstations preconfigured with tools and environments for developer workflow-specific tasks with usage-based pricing. It's ideal for facilitating hands-on learning where training leaders can use Dev Box supported images to create identical virtual machines for trainees. Dev Box virtual machines are Microsoft Entra ID joined and support centralized end-to-end management with Microsoft Intune. #### How can I get started with Microsoft Dev Box? - [What is Microsoft Dev Box?](/azure/dev-box/overview-what-is-microsoft-dev-box) Yes, support continues for current lab deployments until the service retirement. It applies to both lab accounts and lab plans.
Transition encompasses the entire service, including labs using either a lab account or a lab plan. ### What will happen after retirement date?-After June 28, 2027, Azure Lab Services wonΓÇÖt be supported, and you won't have access to your lab accounts, lab plans, or labs. You will, however, have access to your Azure compute gallery and any images you might have saved there. +After June 28, 2027, Azure Lab Services won't be supported, and you won't have access to your lab accounts, lab plans, or labs. You will, however, have access to your Azure compute gallery and any images you might have saved there. ### Are there pricing differences across the Microsoft and partner solutions? Azure Lab Services operates on a consumption-based model where you only pay for active usage in your labs. The hourly price of a lab is based on [the virtual machine size](https://azure.microsoft.com/pricing/details/lab-services/) selected and includes costs such as compute. However, Azure Labs Services covers the cost of storage, which is offered as a complimentary service. The costs for other Microsoft and partner solutions vary based on their pricing model and optimizations that can be enabled. Azure Lab Services supports individual, dedicated virtual machines with persistent storage. Dedicated virtual machines with persistent storage might not be as cost efficient with other lab solutions when compared with options for multi-session, dynamic virtual machine creation, or changing the storage type to a lower tier when a virtual machine is shut down. Yes, you can continue to get help and support for Azure Lab Services by either c To get help and support for Azure Virtual Desktop, Azure DevTest Labs, Windows 365 Cloud PC, and Microsoft Dev Box you can use the usual Microsoft support channels for the particular service. 
### How can I get transition help and support for a partner solution?-If you have questions about how to transition to one of the partnerΓÇÖs solutions, refer to the following resources for each partner (listed alphabetically). +If you have questions about how to transition to one of the partner's solutions, refer to the following resources for each partner (listed alphabetically). - [Apporto](https://aka.ms/azlabs-apporto) - [CloudLabs by Spektra Systems](https://aka.ms/azlabs-spektra) If you have questions about how to transition to one of the partnerΓÇÖs solution - [Skillable](https://aka.ms/azlabs-skillable) ### Can I automatically migrate my existing lab resources from Azure Lab Services to Microsoft and partner solutions?-Partners might provide migration tooling to automatically migrate labs from Azure Lab Services. However, early customer pilots show that itΓÇÖs often more efficient to recreate new labs using the optimizations offered by Microsoft and partner solutions, such as multi-session, dynamic virtual machine creation, and changing the storage type to a lower tier when a virtual machine is shut down. In certain situations, reusing custom images exported from your labs to an Azure compute gallery might be beneficial. Microsoft and partner solutions all support the use of or migration of images from your Azure compute gallery. We recommend evaluating whether existing lab images should be recreated when you're: +Partners might provide migration tooling to automatically migrate labs from Azure Lab Services. However, early customer pilots show that it's often more efficient to recreate new labs using the optimizations offered by Microsoft and partner solutions, such as multi-session, dynamic virtual machine creation, and changing the storage type to a lower tier when a virtual machine is shut down. In certain situations, reusing custom images exported from your labs to an Azure compute gallery might be beneficial. 
Microsoft and partner solutions all support the use or migration of images from your Azure compute gallery. We recommend evaluating whether existing lab images should be recreated when you're: - Upgrading from a [Generation 1](/windows-server/virtualization/hyper-v/plan/should-i-create-a-generation-1-or-2-virtual-machine-in-hyper-v) to a [Generation 2](/azure/virtual-machines/generation-2) VM image, which might have improved boot and installation times. - Restructuring disk size to optimize lab requirements. - Generalizing the image as appropriate, such as for AVD (Azure Lab Services only exports specialized images). - Using a supported base Azure Marketplace image. - Dev Box requires specific Dev Box supported Marketplace images. - AVD requires multi-session Marketplace images to enable multi-session capabilities.++## Related content ++- [Azure Lab Services to Azure DevTest Labs Transition Guide](/azure/lab-services/transition-devtest-labs-guidance)
lab-services | Transition Devtest Labs Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/transition-devtest-labs-guidance.md | + + Title: Transition from Azure Lab Services to Azure DevTest Labs +description: Learn how to transition from Azure Lab Services to Azure DevTest Labs. + Last updated : 08/26/2024++# customer intent: As an Azure Lab Services customer, I want to understand the Azure Lab Services retirement schedule and what Microsoft and partner services I can transition to. +++# Azure Lab Services to Azure DevTest Labs Transition Guide ++When you transition away from Azure Lab Services, DevTest Labs (DTL) is a first-party option to consider. This document outlines when to consider transitioning to DevTest Labs, and when not to. An outline of steps to follow is also included. ++## Scenario guidance ++### What are the target scenarios for DevTest Labs? ++DevTest Labs is targeted at enterprise customers. The primary scenario for which DevTest Labs is designed is the test box scenario, where a professional developer needs temporary access to a virtual machine (VM) that has a prerelease version of the software they need to test. A secondary scenario is professional developer training, when a developer needs temporary access to a VM for internal training. ++### When should a customer consider using DevTest Labs? ++- Customer needs access to Linux VMs - DevTest Labs is the only first-party service that provides access to Linux. Cloud PC, Azure Virtual Desktop, and Microsoft Dev Box don't provide access to native Linux VMs. +- Customer needs to use an image with nested virtualization - DevTest Labs works well with images that use nested virtualization because it provides a dedicated VM for each student. Nested virtualization isn't well-suited for multi-user session VMs because there's no concept of isolation between user sessions.
+- Technical Computer Programming classes - DevTest Labs resources are available using the Azure portal. Only students comfortable with the Azure portal should use DTL. DTL APIs can be used if you want to create a custom portal to access DTL VMs outside of the Azure portal. ++### When should a customer not use DevTest Labs? +- Customer requires extensive cost controls, including user quota and limits on the number of VMs a user can have. DevTest Labs doesn't have any ability to restrict access to a VM based on a quota granted per student. +- Customer requires complex start and stop schedules. DevTest Labs is designed for enterprise developers; it supports daily start and stop schedules. +- Customer requires flexible login methods. DevTest Labs requires that the user exists in the Microsoft Entra ID tenant for the subscription in which the lab is hosted. RBAC permissions are used to control who has access to labs and VMs. ++## Frequently Asked Questions ++**What is the cost model?** +The DevTest Labs service itself is free to use. Customers are charged for resources used by the DevTest Labs service. This cost includes, but isn't limited to, the cost of storage, networking, and running time for any VMs in a lab. ++**Does DevTest Labs provide cost reporting?** +DevTest Labs is integrated into [Microsoft Cost Management](/azure/cost-management-billing/costs/overview-cost-management) for cost budgeting and analysis. [Allow tag inheritance and add tags to lab resource](/azure/devtest-labs/devtest-lab-configure-cost-management) to track per-lab costs. ++**Does DevTest Labs support nested virtualization?** +Yes. + +**Does DevTest Labs support custom images?** +Yes. We recommend [connecting your DevTest Labs to a Shared Image Gallery](/azure/devtest-labs/configure-shared-image-gallery). The Shared Image Gallery can be the same one that is connected to your Azure Lab Services lab account or lab plan.
+ +We recommend using a Shared Image Gallery over the DTL [custom images](/azure/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal) and [formulas](/azure/devtest-labs/devtest-lab-manage-formulas) features, as Shared Image Galleries are compatible with several other Azure services and can be used in multiple labs. ++**Does DevTest Labs support multi-VM environments?** +[Azure Deployment Environments](https://azure.microsoft.com/products/deployment-environments/) is recommended for multi-VM environments. ++**Does DevTest Labs support schedules?** +DevTest Labs supports an optional daily start and/or stop schedule. ++**Does DevTest Labs support web access?** +Yes, if the VM is created in a Bastion-enabled virtual network. See [Enable browser connection to DevTest Labs VMs with Azure Bastion](/azure/devtest-labs/enable-browser-connection-lab-virtual-machines) for details. ++## Transition steps +1. **Verify [compute quota limits](/azure/quotas/view-quotas)** - DevTest Labs uses quota assigned to Compute when creating VMs. Increase [compute quota](/azure/quotas/regional-quota-requests), if needed. +1. **Configure Lab settings** + 1. **Images** + 1. [Restrict Marketplace images](/azure/devtest-labs/devtest-lab-enable-licensed-images) students can use. You can also prevent students from using Marketplace images entirely. + 1. Enable custom images as applicable by [connecting your DevTest Labs to a Shared Image Gallery](/azure/devtest-labs/configure-shared-image-gallery). The gallery can be the same gallery you used with Azure Lab Services. + 1. DTL also supports creating VMs from [uploaded VHD](/azure/devtest-labs/devtest-lab-upload-vhd-using-storage-explorer) files. + 1. **SKU selection** - Consider enabling VM sizes equivalent to Azure Labs SKUs.
See [Azure Lab Services VM Sizes](/azure/lab-services/administrator-guide#default-vm-sizes) for mappings, and make sure to choose sizes that support the [*shared IP* configuration option](/azure/devtest-labs/devtest-lab-shared-ip). + 1. **VM Limitations** - Set [max number of VMs per user to 1](/azure/devtest-labs/devtest-lab-set-lab-policy#set-virtual-machines-per-user). + 1. **Shutdown policies** + 1. Set [autoshutdown time](/azure/devtest-labs/devtest-lab-set-lab-policy#set-auto-shutdown) to ensure VMs are automatically turned off every day. + 1. Set [autoshutdown policy](/azure/devtest-labs/devtest-lab-set-lab-policy#set-auto-shutdown-policy) to 'User has no control over the schedule set by lab administrator.' If students are in multiple time zones, choose 'User sets a schedule and can't opt out' instead. + 1. [Turn off autostart](/azure/devtest-labs/devtest-lab-set-lab-policy#set-autostart) for the lab. + 1. **Virtual Network**. If your lab needs access to a license server, [add a virtual network in Azure DevTest Labs](/azure/devtest-labs/devtest-lab-configure-vnet). + 1. **Web browser access** - Optionally, enable [browser connection to DevTest Labs VMs with Azure Bastion](/azure/devtest-labs/enable-browser-connection-lab-virtual-machines). +1. **Create lab** - [Quickstart: Create a lab in the Azure portal - Azure DevTest Labs](/azure/devtest-labs/devtest-lab-create-lab). +1. **Cost Tracking** - Use custom tags for cost tracking in Microsoft Cost Management as it allows more nuanced cost analysis of underlying resources. [Allow tag inheritance and add tags to lab resource](/azure/devtest-labs/devtest-lab-configure-cost-management). +1. **Claimable VMs** - Optionally, precreate claimable VMs to ensure VMs are created with expected settings. + 1. Using advanced settings, multiple identical VMs can be created at once. + 1. Using advanced settings, set the expiration date for [claimable VMs](/azure/devtest-labs/devtest-lab-use-claim-capabilities).
VMs are automatically deleted after their expiration date, avoiding unnecessary storage charges. +1. **Add Users** - [Add lab owners, contributors, and users in Azure DevTest Labs](/azure/devtest-labs/devtest-lab-add-devtest-user). + - If there are claimable VMs, students can use the 'claim any' command to assign a precreated VM to themselves. +1. **Configure Dashboard** - Optionally, [create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards) to allow students to find their labs more easily. ++> [!Important] +> If using a Linux VM that only supports access using SSH, follow detailed instructions at [Connect to a Linux VM in your lab (Azure DevTest Labs)](/azure/devtest-labs/connect-linux-virtual-machine). ++## Related content ++- [Azure Lab Services Retirement Guide](/azure/lab-services/retirement-guide) |
load-balancer | Gateway Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md | You can insert appliances transparently for different kinds of scenarios such as * DDoS protection * Custom appliances -With Gateway Load Balancer, you can easily add or remove advanced network functionality without extra management overhead. It provides the bump-in-the-wire technology you need to ensure all traffic to a public endpoint is first sent to the appliance before your application. In scenarios with NVAs, it's especially important that flows are symmetrical. Gateway Load Balancer maintains flow stickiness to a specific instance in the backend pool along with flow symmetry. As a result, a consistent route to your network virtual appliance is ensured – without other manual configuration. As a result, packets traverse the same network path in both directions and appliances that need this key capability are able to function seamlessly. +With Gateway Load Balancer, you can easily add or remove advanced network functionality without extra management overhead. It provides the bump-in-the-wire technology you need to ensure all traffic to and from a public endpoint is first sent to the appliance before your application. In scenarios with NVAs, it's especially important that flows are symmetrical. Gateway Load Balancer maintains flow stickiness to a specific instance in the backend pool along with flow symmetry. As a result, a consistent route to your network virtual appliance is ensured – without further manual configuration. This way, packets traverse the same network path in both directions and appliances that need this key capability are able to function seamlessly. The health probe listens across all ports and routes traffic to the backend instances using the HA ports rule. Traffic sent to and from Gateway Load Balancer uses the VXLAN protocol. 
Gateway Load Balancer has the following benefits: * Improve network virtual appliance availability. -* Chain applications across regions and subscriptions +* Chain applications across tenants and subscriptions -A Standard Public Load balancer or a Standard IP configuration of a virtual machine can be chained to a Gateway Load Balancer. Once chained to a Standard Public Load Balancer frontend or Standard IP configuration on a virtual machine, no extra configuration is needed to ensure traffic to, and from the application endpoint is sent to the Gateway Load Balancer. +## Configuration and supported scenarios -Traffic moves from the consumer virtual network to the provider virtual network. The traffic then returns to the consumer virtual network. The consumer virtual network and provider virtual network can be in different subscriptions, tenants, or regions removing management overhead. +A Standard Public Load Balancer or a Standard IP configuration of a virtual machine can be chained to a Gateway Load Balancer. "Chaining" refers to the load balancer frontend or NIC IP configuration containing a reference to a Gateway Load Balancer frontend IP configuration. Once the Gateway Load Balancer is chained to a consumer resource, no additional configuration, such as UDRs, is needed to ensure traffic to and from the application endpoint is sent to the Gateway Load Balancer. ++Gateway Load Balancer supports both inbound and outbound traffic inspection. For inserting NVAs in the path of outbound traffic with Standard Load Balancer, Gateway Load Balancer must be chained to the frontend IP configurations selected in the configured outbound rules. ++## Data path diagram ++With Gateway Load Balancer, traffic intended for the consumer application through a Standard Load Balancer will be encapsulated with VXLAN headers and forwarded first to the Gateway Load Balancer and its configured NVAs in the backend pool. 
The traffic then returns to the consumer resource (in this case a Standard Load Balancer) and arrives at the consumer application virtual machines with its source IP preserved. The consumer virtual network and provider virtual network can be in different subscriptions or tenants, reducing management overhead. :::image type="content" source="./media/gateway-overview/gateway-load-balancer-diagram.png" alt-text="Diagram of gateway load balancer"::: For pricing, see [Load Balancer pricing](https://azure.microsoft.com/pricing/det ## Next steps - See [Create a Gateway Load Balancer using the Azure portal](tutorial-gateway-portal.md) to create a gateway load balancer.+- Learn how to use [Gateway Load Balancer for outbound connectivity scenarios](tutorial-gateway-outbound-connectivity.md). - Learn more about [Azure Load Balancer](load-balancer-overview.md). |
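The VXLAN encapsulation mentioned above uses the standard 8-byte VXLAN header defined in RFC 7348. The following sketch is generic VXLAN, not Azure's internal implementation; it only illustrates how the 24-bit VNI is carried in the header:

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0 carries the flags (0x08 = VNI-valid bit set); bytes 4-6
    carry the 24-bit VXLAN Network Identifier; the rest is reserved.
    """
    assert 0 <= vni < 1 << 24, "VNI is a 24-bit value"
    return struct.pack("!II", 0x08 << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Extract the VNI from an 8-byte VXLAN header."""
    flags_word, vni_word = struct.unpack("!II", header)
    assert flags_word >> 24 == 0x08, "VNI-valid flag not set"
    return vni_word >> 8

hdr = pack_vxlan_header(vni=800)
assert len(hdr) == 8
assert unpack_vni(hdr) == 800
```

The encapsulated frame (VXLAN header plus the original packet) travels over UDP between the chained frontend and the NVAs, which is why the inner source IP can be preserved end to end.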
load-balancer | Load Balancer Custom Probe Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md | The protocol used by the health probe can be configured to one of the following ## Probe interval & timeout -The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. If the health probe succeeds on the next healthy probe up, Azure Load Balancer marks your backend pool instances as healthy. The health probe attempts to check the configured health probe port every 5 seconds by default but can be explicitly set to another value. +The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. If a subsequent health probe succeeds, Azure Load Balancer marks your backend pool instances as healthy. The health probe attempts to check the configured health probe port every 5 seconds by default in the Azure portal, but can be explicitly set to another value. In order to ensure a timely response is received, HTTP/S health probes have built-in timeouts. The following are the timeout durations for TCP and HTTP/S probes: * TCP probe timeout duration: N/A (probes will fail once the configured probe interval duration has passed and the next probe has been sent) * HTTP/S probe timeout duration: 30 seconds -For HTTP/S probes, if the configured interval is longer than the above timeout period, the health probe will timeout and fail if no response is received during the timeout period. 
For example, if an HTTP health probe is configured with a probe interval of 120 seconds (every 2 minutes), and no probe response is received within the first 30 seconds, the probe will have reached its timeout period and fail. +For HTTP/S probes, if the configured interval is longer than the above timeout period, the health probe will timeout and fail if no response is received during the timeout period. For example, if an HTTP health probe is configured with a probe interval of 120 seconds (every 2 minutes), and no probe response is received within the first 30 seconds, the probe will have reached its timeout period and fail. When the configured interval is shorter than the above timeout period, the health probe will fail if no response is received before the configured interval period completes and the next probe will be sent immediately. ## Design guidance |
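The interval-versus-timeout behavior described above can be sketched as a small decision function. This is illustrative Python only; the name `http_probe_fails` is not part of any Azure SDK:

```python
from typing import Optional

# Fixed response timeout for HTTP/S probes, per the durations listed above.
HTTP_PROBE_TIMEOUT_SECONDS = 30

def http_probe_fails(interval_seconds: int,
                     response_after_seconds: Optional[float]) -> bool:
    """Return True if an HTTP/S probe is considered failed.

    response_after_seconds is how long the backend took to respond,
    or None if no response arrived at all.
    """
    # The probe waits at most min(interval, 30 s) for a response:
    # a long interval is capped by the 30-second timeout, and a short
    # interval fails the probe as soon as the next probe is due.
    wait_window = min(interval_seconds, HTTP_PROBE_TIMEOUT_SECONDS)
    return (response_after_seconds is None
            or response_after_seconds > wait_window)

# 120-second interval: no response in the first 30 seconds -> probe fails.
assert http_probe_fails(120, None)
assert http_probe_fails(120, 45)      # responded, but after the 30 s timeout
assert not http_probe_fails(120, 10)  # responded within 30 s -> healthy
# 5-second interval: must respond before the next probe is sent.
assert http_probe_fails(5, 7)
assert not http_probe_fails(5, 2)
```

The `min()` captures both cases from the text: a 120-second interval still fails at the 30-second timeout, while a 5-second interval fails as soon as the next probe is due.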
load-balancer | Load Balancer Nat Pool Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md | Install-Module -Name AzureLoadBalancerNATPoolMigration -Scope CurrentUser -Repos #### Example: pass a Load Balancer from the pipeline ```azurepowershell- Get-AzLoadBalancer -ResourceGroupName -ResourceGroupName <loadBalancerResourceGroupName> -Name <LoadBalancerName> | Start-AzNATPoolMigration + Get-AzLoadBalancer -ResourceGroupName <loadBalancerResourceGroupName> -Name <LoadBalancerName> | Start-AzNATPoolMigration ``` ## Common Questions |
migrate | Common Questions Server Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md | Migration and modernization tool migrates all the UEFI-based machines to Azure a | SUSE Linux Enterprise Server 12 SP4 | Y | Y | Y | | Ubuntu Server 16.04, 18.04, 19.04, 19.10 | Y | Y | Y | | RHEL 8.1, 8.0, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x | Y | Y | Y |-| Cent OS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4, 6.x | Y | Y | Y | +| CentOS Stream | Y | Y | Y | | Oracle Linux 7.7, 7.7-CI | Y | Y | Y | ### Can I migrate Active Directory domain-controllers using Azure Migrate? |
migrate | Prepare For Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md | Configure this setting manually as follows: Azure Migrate completes these actions automatically for these versions - Red Hat Enterprise Linux 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2, 7.1, 7.0, 6.x (Azure Linux VM agent is also installed automatically during migration)-- Cent OS 8.x, 7.7, 7.6, 7.5, 7.4, 6.x (Azure Linux VM agent is also installed automatically during migration)+- CentOS Stream (Azure Linux VM agent is also installed automatically during migration) - SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3 - Ubuntu 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS (Azure Linux VM agent is also installed automatically during migration) - Debian 10, 9, 8, 7 |
operational-excellence | Overview Relocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md | The following tables provide links to each Azure service relocation document. Th | | | | | [Azure Automation](./relocation-automation.md)| ✅ | ✅| ❌ | [Azure IoT Hub](/azure/iot-hub/iot-hub-how-to-clone?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |+[Azure Static Web Apps](./relocation-static-web-apps.md) | ✅ |❌ | ❌ | [Power BI](/power-bi/admin/service-admin-region-move?toc=/azure/operational-excellence/toc.json)| ✅ |❌ | ❌ | |
operational-excellence | Relocation Static Web Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-static-web-apps.md | + + Title: Relocate Azure Static Web Apps to another region +description: Learn how to relocate Azure Static Web Apps to another region +++ Last updated : 08/19/2024++++ - subject-relocation +#Customer intent: As an Azure service administrator, I want to move my Azure Static Web Apps resources to another Azure region. +++# Relocate Azure Static Web Apps to another region ++This article describes how to relocate [Azure Static Web Apps](../static-web-apps/overview.md) resources to another Azure region. +++++## Prerequisites ++Review the following prerequisites before you prepare for the relocation. ++- [Validate that Azure Static Web Apps is available in the target region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/). ++- Make sure that you have permission to create Static Web App resources in the target region. ++- Find out if there are any Azure Policy region restrictions that apply to your organization. ++- If using integrated API support provided by Azure Functions: + - Determine the availability of Azure Functions in the target region. + - Determine if Function API Keys are being used. For example, are you using Key Vault or do you deploy them as part of your application configuration files? + - Determine the deployment model for API support in the target region: [Distributed managed functions](../static-web-apps/distributed-functions.md) or [Bring Your own functions](../static-web-apps/functions-bring-your-own.md). Understand the differences between the two models. ++- Ensure that the Standard Hosting Plan is used to host the Static Web App. For more information about hosting plans, see [Azure Static Web Apps hosting plans](../static-web-apps/plans.md). ++- Determine the permissible downtime for relocation. 
++- Depending on your Azure Static Web App deployment, the following dependent resources may need to be deployed and configured in the target region *prior* to relocation: + + - [Azure Functions](./relocation-functions.md). + - [Azure Virtual Network](./relocation-virtual-network.md) + - [Azure Front Door](../frontdoor/front-door-overview.md) + - [Network Security Group](./relocation-virtual-network-nsg.md) + - [Azure Private Link Service](./relocation-private-link.md) + - [Azure Key Vault](./relocation-key-vault.md) + - [Azure Storage Account](./relocation-storage-account.md) + - [Azure Application Gateway](./relocation-app-gateway.md) +++## Downtime ++The relocation of a static web app introduces downtime to your application. The downtime is affected by which high-availability pattern you've implemented for your static web app. General patterns are: +- **Cold standby**: Workload data is backed up regularly based on its requirements. In case of a disaster, the workload is redeployed in a new Azure region and data is restored. +- **Warm standby**: The workload is deployed in the business continuity and disaster recovery (BCDR) region, and data is replicated asynchronously or synchronously. In the event of a disaster, the deployment in the disaster recovery (DR) region is scaled up and out. +- **Multi-region**: The workload is [deployed in both regions](/azure/architecture/web-apps/app-service/architectures/multi-region) and data is replicated synchronously. Both regions have a writable copy of the data. The implementation can be active/passive or active/active. ++## Prepare ++### Deployments with private endpoints ++If your Static Web Apps are deployed with private endpoints, make sure to: ++- Update the host name for the connection endpoint. +- Update the host name on the DNS private zone or custom DNS server (only applicable to Private Link). 
++For more information, see [Configure private endpoint in Azure Static Web Apps](../static-web-apps/private-endpoint.md). +++### All other deployments ++For all other deployment types, make sure to: ++- If applicable, retrieve the new Function API keys from Azure Functions in the new region. ++- If the Azure Function has a dependency on a database, ensure that the `DATABASE_CONNECTION_STRING` is updated. This database may not be in scope of regional migration. ++- Update the custom domain to point to the new hostname of the static web app. ++- If using Key Vault, provision a new Key Vault in the target region. Update the Function API Keys in Key Vault if applicable. Store any other sensitive data that shouldn't be kept in code or config files in this Key Vault. +++### Export the template ++To export the Resource Manager template that contains settings that describe your static web app: + +1. Sign in to the [Azure portal](https://portal.azure.com). +1. Go to your static web app. +1. From the left menu, under *Automation*, select **Export template**. ++ The template may take a moment to generate. ++1. Select **Download**. ++1. Locate the downloaded `.zip` file, and extract it into a folder of your choice. ++ This file contains the `.json` files that include the template and scripts to deploy the template. ++1. Make the necessary changes to the template, such as updating the location to the target region. +++## Relocate ++Use the following steps to relocate your static web app to another region. ++1. If you are relocating with Private Endpoint, follow the guidelines in [Relocate Azure Private Link Service to another region](./relocation-private-link.md). ++1. If you've provided an existing Azure Functions app to your static web app, follow the relocation procedure for [Azure Functions](./relocation-functions.md). ++1. Redeploy your static web app using the [template that you exported and configured in the previous section](#export-the-template). 
++ + >[!IMPORTANT] + > If you're not using a custom domain, your application's URL changes in the target region. In this scenario, ensure that users know about the URL change. ++1. If you're using an Integrated API, create a new Integrated API that's supported by Azure Functions. ++1. Reconfigure your repository (GitHub or Azure DevOps) to deploy into the newly deployed static web app in the target region. Initiate the deployment of the application using GitHub Actions or Azure Pipelines. ++1. With a *cold standby* deployment, make sure you inform clients about the new URL. If you're using a custom DNS domain, simply change the DNS entry to point to the target region. With a *warm standby* deployment, a load balancer, such as Front Door or Traffic Manager, handles migration of the static web app from the source region to the target region. +++++++ |
reliability | Availability Zones Service Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md | The following regions currently support availability zones: | Brazil South | France Central | Qatar Central | South Africa North | Australia East | | Canada Central | Italy North | UAE North | | Central India | | Central US | Germany West Central | Israel Central | | Japan East |-| East US | Norway East | | | Korea Central | +| East US | Norway East | | | *Japan West | | East US 2 | North Europe | | | Southeast Asia | | South Central US | UK South | | | East Asia | | US Gov Virginia | West Europe | | | China North 3 |-| West US 2 | Sweden Central | | | | +| West US 2 | Sweden Central | | | Korea Central | | West US 3 | Switzerland North | | | | | Mexico Central | Poland Central |||| ||Spain Central |||| |
route-server | Expressroute Vpn Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md | You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes dynamically over BGP. For more information, see [How to configure BGP for Azure VPN Gateway](../vpn-gateway/bgp-howto.md). If you don't enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes that are defined in the local network gateway of *On-premises 1*. For more information, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). Whether you enable BGP on the VPN gateway or not, the gateway advertises the routes it learns to the Route Server if route exchange is enabled. For more information, see [Configure route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange). -> [!IMPORTANT] -> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not a requirement to have BGP enabled on the VPN gateway to communicate with the Route Server. :::image type="content" source="./media/expressroute-vpn-support/expressroute-and-vpn-with-route-server.png" alt-text="Diagram showing ExpressRoute and VPN gateways exchanging routes through Azure Route Server."::: |
sap | Manage Virtual Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-virtual-instance.md | In the sidebar menu, look under the section **SAP resources**: :::image type="content" source="media/configure-virtual-instance/sap-resources.png" lightbox="media/configure-virtual-instance/sap-resources.png" alt-text="Screenshot of VIS resource in Azure portal, showing SAP resources pages in the sidebar menu for ASCS, App server, and Database instances."::: +## View managed identity under VIS ++You can view, create, or delete the managed identity under the VIS. ++ ## Default Instance Numbers If you've deployed an SAP system using Azure Center for SAP solutions, the following list shows the default values of instance numbers configured during deployment: To delete a VIS: 1. [Open the VIS in the Azure portal](#open-vis-in-portal). 1. On the overview page's menu, select **Delete**. - :::image type="content" source="media/configure-virtual-instance/delete-vis-button.png" lightbox="media/configure-virtual-instance/delete-vis-button.png" alt-text="Screenshot of VIS resource in the Azure portal, showing delete button in the overview page's menu.."::: + :::image type="content" source="media/configure-virtual-instance/delete-vis-button.png" lightbox="media/configure-virtual-instance/delete-vis-button.png" alt-text="Screenshot of VIS resource in the Azure portal, showing delete button in the overview page's menu."::: 1. In the deletion pane, make sure that you want to delete this VIS and related resources. You can see a count for each type of resource to be deleted. |
sentinel | Api Dcr Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/api-dcr-reference.md | PUT https://management.azure.com/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeee #### Syslog/CEF DCR creation request body -The following is an example of a DCR creation request. For each stream—you can have several in one DCR—change the value of the `"Streams"` field according to the source of the messages you want to ingest: +The following is an example of a DCR creation request. For each data source stream—you can have several in one DCR—add a new subsection under `"syslog"` in the `"dataSources"` section and set the value of the `"streams"` field according to the source of the messages you want to ingest: -| Log source | `"Streams"` field value | +| Log source | `"streams"` field value | | | | | **Syslog** | `"Microsoft-Syslog"` | | **CEF** | `"Microsoft-CommonSecurityLog"` | | **Cisco ASA** | `"Microsoft-CiscoAsa"` | +The following code sample shows an example with multiple stream sections: + ```json { "location": "centralus", |
sentinel | Automate Incident Handling With Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md | In short, automation rules streamline the use of automation in Microsoft Sentine Automation rules are made up of several components: -- **[Triggers](#triggers)** that define what kind of incident event will cause the rule to run, subject to...-- **[Conditions](#conditions)** that will determine the exact circumstances under which the rule will run and perform...-- **[Actions](#actions)** to change the incident in some way or call a [playbook](automate-responses-with-playbooks.md).+- **[Triggers](#triggers)** that define what kind of incident event causes the rule to run, subject to **conditions**. +- **[Conditions](#conditions)** that determine the exact circumstances under which the rule runs and performs **actions**. +- **[Actions](#actions)** to change the incident in some way or call a [playbook](automate-responses-with-playbooks.md), which performs more complex actions and interacts with other services. ### Triggers Automation rules are triggered **when an incident is created or updated** or **when an alert is created**. Recall that incidents include alerts, and that both alerts and incidents can be created by analytics rules, of which there are several types, as explained in [Threat detection in Microsoft Sentinel](threat-detection.md). -The following table shows the different possible scenarios that will cause an automation rule to run. +The following table shows the different possible scenarios that cause an automation rule to run. | Trigger type | Events that cause the rule to run | | | | Even without being onboarded to the unified portal, you might anyway decide to u Complex sets of conditions can be defined to govern when actions (see below) should run. 
These conditions include the event that triggers the rule (incident created or updated, or alert created), the states or values of the incident's properties and [entity properties](entities-reference.md) (for incident trigger only), and also the analytics rule or rules that generated the incident or alert. -When an automation rule is triggered, it checks the triggering incident or alert against the conditions defined in the rule. For incidents, the property-based conditions are evaluated according to **the current state** of the property at the moment the evaluation occurs, or according to **changes in the state** of the property (see below for details). Since a single incident creation or update event could trigger several automation rules, the **order** in which they run (see below) makes a difference in determining the outcome of the conditions' evaluation. The **actions** defined in the rule will run only if all the conditions are satisfied. +When an automation rule is triggered, it checks the triggering incident or alert against the conditions defined in the rule. For incidents, the property-based conditions are evaluated according to **the current state** of the property at the moment the evaluation occurs, or according to **changes in the state** of the property (see below for details). Since a single incident creation or update event could trigger several automation rules, the **order** in which they run (see below) makes a difference in determining the outcome of the conditions' evaluation. The **actions** defined in the rule are executed only if all the conditions are satisfied. #### Incident create trigger In this example, in *Incident 1*: - If the condition checks each tag individually, then since there's at least one tag that *satisfies the condition* (that *doesn't* contain "2024"), the overall condition is **true**. 
- If the condition checks all the tags in the incident as a single unit, then since there's at least one tag that *doesn't satisfy the condition* (that *does* contain "2024"), the overall condition is **false**. -In *Incident 2*, the outcome will be the same, regardless of which type of condition is defined. +In *Incident 2*, the outcome is the same, regardless of which type of condition is defined. #### Alert create trigger -Currently the only condition that can be configured for the alert creation trigger is the set of analytics rules for which the automation rule will run. +Currently the only condition that can be configured for the alert creation trigger is the set of analytics rules for which the automation rule is run. ### Actions Actions can be defined to run when the conditions (see above) are met. You can d - Adding a tag to an incident ΓÇô this is useful for classifying incidents by subject, by attacker, or by any other common denominator. -Also, you can define an action to [**run a playbook**](tutorial-respond-threats-playbook.md), in order to take more complex response actions, including any that involve external systems. The playbooks available to be used in an automation rule depend on the [**trigger**](automate-responses-with-playbooks.md#azure-logic-apps-basic-concepts) on which the playbooks *and* the automation rule are based: Only incident-trigger playbooks can be run from incident-trigger automation rules, and only alert-trigger playbooks can be run from alert-trigger automation rules. You can define multiple actions that call playbooks, or combinations of playbooks and other actions. Actions will run in the order in which they are listed in the rule. +Also, you can define an action to [**run a playbook**](tutorial-respond-threats-playbook.md), in order to take more complex response actions, including any that involve external systems. 
The playbooks available to be used in an automation rule depend on the [**trigger**](automate-responses-with-playbooks.md#azure-logic-apps-basic-concepts) on which the playbooks *and* the automation rule are based: Only incident-trigger playbooks can be run from incident-trigger automation rules, and only alert-trigger playbooks can be run from alert-trigger automation rules. You can define multiple actions that call playbooks, or combinations of playbooks and other actions. Actions are executed in the order in which they are listed in the rule. -Playbooks using [either version of Azure Logic Apps (Standard or Consumption)](automate-responses-with-playbooks.md#logic-app-types) will be available to run from automation rules. +Playbooks using [either version of Azure Logic Apps (Standard or Consumption)](automate-responses-with-playbooks.md#logic-app-types) are available to run from automation rules. ### Expiration date -You can define an expiration date on an automation rule. The rule will be disabled after that date. This is useful for handling (that is, closing) "noise" incidents caused by planned, time-limited activities such as penetration testing. +You can define an expiration date on an automation rule. The rule is disabled after that date passes. This is useful for handling (that is, closing) "noise" incidents caused by planned, time-limited activities such as penetration testing. ### Order -You can define the order in which automation rules will run. Later automation rules will evaluate the conditions of the incident according to its state after being acted on by previous automation rules. +You can define the order in which automation rules are run. Later automation rules evaluate the conditions of the incident according to its state after being acted on by previous automation rules. 
-For example, if "First Automation Rule" changed an incident's severity from Medium to Low, and "Second Automation Rule" is defined to run only on incidents with Medium or higher severity, it won't run on that incident. +For example, if "First Automation Rule" changed an incident's severity from Medium to Low, and "Second Automation Rule" is defined to run only on incidents with Medium or higher severity, it doesn't run on that incident. -The order of automation rules that add [incident tasks](incident-tasks.md) determines the order in which the tasks will appear in a given incident. +The order of automation rules that add [incident tasks](incident-tasks.md) determines the order in which the tasks appear in a given incident. -Rules based on the update trigger have their own separate order queue. If such rules are triggered to run on a just-created incident (by a change made by another automation rule), they will run only after all the applicable rules based on the create trigger have run. +Rules based on the update trigger have their own separate order queue. If such rules are triggered to run on a just-created incident (by a change made by another automation rule), they run only after all the applicable rules based on the create trigger are finished running. #### Notes on execution order and priority - Setting the **order** number in automation rules determines their order of execution. - Each trigger type maintains its own queue.-- For rules created in the Azure portal, the **order** field will be automatically populated with the number following the highest number used by existing rules of the same trigger type.+- For rules created in the Azure portal, the **order** field is automatically populated with the number following the highest number used by existing rules of the same trigger type. - However, for rules created in other ways (command line, API, etc.), the **order** number must be assigned manually. 
- There is no validation mechanism preventing multiple rules from having the same order number, even within the same trigger type. - You can allow two or more rules of the same trigger type to have the same order number, if you don't care which order they run in.-- For rules of the same trigger type with the same order number, the execution engine randomly selects which rules will run in which order.-- For rules of different *incident trigger* types, all applicable rules with the *incident creation* trigger type will run first (according to their order numbers), and only then the rules with the *incident update* trigger type (according to *their* order numbers).+- For rules of the same trigger type with the same order number, the execution engine randomly selects which rules run in which order. +- For rules of different *incident trigger* types, all applicable rules with the *incident creation* trigger type run first (according to their order numbers), and only then the rules with the *incident update* trigger type (according to *their* order numbers). - Rules always run sequentially, never in parallel. > [!NOTE] Rules based on the update trigger have their own separate order queue. If such r ### Incident tasks -Automation rules allow you to standardize and formalize the steps required for the triaging, investigation, and remediation of incidents, by [creating tasks](incident-tasks.md) that can be applied to a single incident, across groups of incidents, or to all incidents, according to the conditions you set in the automation rule and the threat detection logic in the underlying analytics rules. Tasks applied to an incident appear in the incident's page, so your analysts have the entire list of actions they need to take, right in front of them, and won't miss any critical steps. 
+Automation rules allow you to standardize and formalize the steps required for the triaging, investigation, and remediation of incidents, by [creating tasks](incident-tasks.md) that can be applied to a single incident, across groups of incidents, or to all incidents, according to the conditions you set in the automation rule and the threat detection logic in the underlying analytics rules. Tasks applied to an incident appear in the incident's page, so your analysts have the entire list of actions they need to take, right in front of them, and don't miss any critical steps. ### Incident- and alert-triggered automation You can use the update trigger to apply many of the above use cases to incidents ### Update orchestration and notification -Notify your various teams and other personnel when changes are made to incidents, so they won't miss any critical updates. Escalate incidents by assigning them to new owners and informing the new owners of their assignments. Control when and how incidents are reopened. +Notify your various teams and other personnel when changes are made to incidents, so they don't miss any critical updates. Escalate incidents by assigning them to new owners and informing the new owners of their assignments. Control when and how incidents are reopened. ### Maintain synchronization with external systems -If you've used playbooks to create tickets in external systems when incidents are created, you can use an update-trigger automation rule to call a playbook that will update those tickets. +If you've used playbooks to create tickets in external systems when incidents are created, you can use an update-trigger automation rule to call a playbook that updates those tickets. ## Automation rules execution Playbook actions within an automation rule might be treated differently under so When a Microsoft Sentinel automation rule runs a playbook, it uses a special Microsoft Sentinel service account specifically authorized for this action. 
The use of this account (as opposed to your user account) increases the security level of the service. -In order for an automation rule to run a playbook, this account must be granted explicit permissions to the resource group where the playbook resides. At that point, any automation rule will be able to run any playbook in that resource group. +In order for an automation rule to run a playbook, this account must be granted explicit permissions to the resource group where the playbook resides. At that point, any automation rule can run any playbook in that resource group. -When you're configuring an automation rule and adding a **run playbook** action, a drop-down list of playbooks will appear. Playbooks to which Microsoft Sentinel does not have permissions will show as unavailable ("grayed out"). You can grant Microsoft Sentinel permission to the playbooks' resource groups on the spot by selecting the **Manage playbook permissions** link. To grant those permissions, you'll need **Owner** permissions on those resource groups. [See the full permissions requirements](tutorial-respond-threats-playbook.md#respond-to-incidents). +When you're configuring an automation rule and adding a **run playbook** action, a drop-down list of playbooks appears. Playbooks to which Microsoft Sentinel does not have permissions display as unavailable ("grayed out"). You can grant Microsoft Sentinel permission to the playbooks' resource groups on the spot by selecting the **Manage playbook permissions** link. To grant those permissions, you need **Owner** permissions on those resource groups. [See the full permissions requirements](tutorial-respond-threats-playbook.md#respond-to-incidents). 
#### Permissions in a multitenant architecture In the specific case of a Managed Security Service Provider (MSSP), where a serv - **An automation rule created in the customer tenant is configured to run a playbook located in the service provider tenant.** - This approach is normally used to protect intellectual property in the playbook. Nothing special is required for this scenario to work. When defining a playbook action in your automation rule, and you get to the stage where you grant Microsoft Sentinel permissions on the relevant resource group where the playbook is located (using the **Manage playbook permissions** panel), you'll see the resource groups belonging to the service provider tenant among those you can choose from. [See the whole process outlined here](tutorial-respond-threats-playbook.md#respond-to-incidents). + This approach is normally used to protect intellectual property in the playbook. Nothing special is required for this scenario to work. When defining a playbook action in your automation rule, and you get to the stage where you grant Microsoft Sentinel permissions on the relevant resource group where the playbook is located (using the **Manage playbook permissions** panel), you can see the resource groups belonging to the service provider tenant among those you can choose from. [See the whole process outlined here](tutorial-respond-threats-playbook.md#respond-to-incidents). - **An automation rule created in the customer workspace (while signed into the service provider tenant) is configured to run a playbook located in the customer tenant**. You can [create and manage automation rules](create-manage-use-automation-rules. In the **Automation** page, you see all the rules that are defined on the workspace, along with their status (Enabled/Disabled) and which analytics rules they are applied to. 
- When you need an automation rule that will apply to incidents from Microsoft Defender XDR, or from many analytics rules in Microsoft Sentinel, create it directly in the **Automation** page. + When you need an automation rule that applies to incidents from Microsoft Defender XDR, or from many analytics rules in Microsoft Sentinel, create it directly in the **Automation** page. - **Analytics rule wizard** In the **Automated response** tab of the Microsoft Sentinel analytics rule wizard, under **Automation rules**, you can view, edit, and create automation rules that apply to the particular analytics rule being created or edited in the wizard. - You'll notice that when you create an automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you. + When you create an automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you. - **Incidents page** You can also create an automation rule from the **Incidents** page, in order to respond to a single, recurring incident. This is useful when creating a [suppression rule](#incident-suppression) for [automatically closing "noisy" incidents](false-positives.md). - You'll notice that when you create an automation rule from here, the **Create new automation rule** panel has populated all the fields with values from the incident. It names the rule the same name as the incident, applies it to the analytics rule that generated the incident, and uses all the available entities in the incident as conditions of the rule. 
It also suggests a suppression (closing) action by default, and suggests an expiration date for the rule. You can add or remove conditions and actions, and change the expiration date, as you wish. + When you create an automation rule from here, the **Create new automation rule** panel populates all the fields with values from the incident. It names the rule the same name as the incident, applies it to the analytics rule that generated the incident, and uses all the available entities in the incident as conditions of the rule. It also suggests a suppression (closing) action by default, and suggests an expiration date for the rule. You can add or remove conditions and actions, and change the expiration date, as you wish. +### Export and import automation rules (Preview) ++Export your automation rules to Azure Resource Manager (ARM) template files, and import rules from these files, as part of managing and controlling your Microsoft Sentinel deployments as code. The export action creates a JSON file in your browser's downloads location, that you can then rename, move, and otherwise handle like any other file. ++The exported JSON file is workspace-independent, so it can be imported to other workspaces and even other tenants. As code, it can also be version-controlled, updated, and deployed in a managed CI/CD framework. ++The file includes all the parameters defined in the automation rule. Rules of any trigger type can be exported to a JSON file. ++For instructions on exporting and importing automation rules, see [Export and import Microsoft Sentinel automation rules](import-export-automation-rules.md). ## Next steps |
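As a rough illustration of the export format, the following sketch builds a minimal ARM-template-shaped document and applies the same basic check the import step performs (a non-empty `resources` list). The resource type and property names here are assumptions for illustration, not the exact schema Microsoft Sentinel emits:

```python
import json

# Illustrative shape only: the real export uses Microsoft Sentinel's own
# schema; the resource type and property names here are assumptions.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.SecurityInsights/automationRules",  # assumed
            "name": "Close noisy pentest incidents",
            "properties": {"order": 1, "actions": []},
        }
    ],
}

def is_importable(doc: dict) -> bool:
    """Mirror the basic import-time check: a JSON document with a
    non-empty 'resources' list."""
    resources = doc.get("resources")
    return isinstance(resources, list) and len(resources) > 0

exported = json.dumps(template, indent=2)   # what the Export action saves to disk
assert is_importable(json.loads(exported))  # round-trips and passes the check
```

Because nothing in the file refers to a specific workspace, the same JSON can be deployed to any workspace or tenant.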
sentinel | Data Connectors Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md | Title: Find your Microsoft Sentinel data connector | Microsoft Docs description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 07/03/2024 Last updated : 08/26/2024 appliesto: Log collection from many security appliances and devices is supported by the da Contact the solution provider for more information or where information is unavailable for the appliance or device. +## Custom Logs via AMA connector ++Filter and ingest logs in text-file format from network or security applications installed on Windows or Linux machines by using the **Custom Logs via AMA connector** in Microsoft Sentinel. For more information, see the following articles: ++- [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](/azure/sentinel/connect-custom-logs-ama?tabs=portal) +- [Custom Logs via AMA data connector - Configure data ingestion to Microsoft Sentinel from specific applications](/azure/sentinel/unified-connector-custom-device) + ## Codeless connector platform connectors The following connectors use the current codeless connector platform but don't have a specific documentation page generated. They're available from the content hub in Microsoft Sentinel as part of a solution. For instructions on how to configure these data connectors, review the instructions available with each data connector within Microsoft Sentinel. |
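To give a flavor of what collecting text logs with the AMA involves, here is a hypothetical fragment of a data collection rule (DCR) for a text-file data source. Every name, path, and stream below is a placeholder; the linked articles are the authoritative reference for the schema:

```json
{
  "properties": {
    "dataSources": {
      "logFiles": [
        {
          "name": "appTextLogs",
          "streams": [ "Custom-MyAppLogs_CL" ],
          "filePatterns": [ "/var/log/myapp/*.log" ],
          "format": "text",
          "settings": { "text": { "recordStartTimestampFormat": "ISO 8601" } }
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Custom-MyAppLogs_CL" ],
        "destinations": [ "sentinelWorkspace" ],
        "transformKql": "source",
        "outputStream": "Custom-MyAppLogs_CL"
      }
    ]
  }
}
```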
sentinel | Import Export Analytics Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/import-export-analytics-rules.md | The file includes all the parameters defined in the analytics rule, so for **Sch 1. Select the rule you want to export and click **Export** from the bar at the top of the screen. - :::image type="content" source="./media/import-export-analytics-rules/export-rule.png" alt-text="Export analytics rule" lightbox="./media/import-export-analytics-rules/export-rule.png"::: + :::image type="content" source="./media/import-export-analytics-rules/export-analytics-rule.png" alt-text="Export analytics rule" lightbox="./media/import-export-analytics-rules/export-analytics-rule.png"::: > [!NOTE] > - You can select multiple analytics rules at once for export by marking the check boxes next to the rules and clicking **Export** at the end. The file includes all the parameters defined in the analytics rule, so for **Sch 1. Click **Import** from the bar at the top of the screen. In the resulting dialog box, navigate to and select the JSON file representing the rule you want to import, and select **Open**. - :::image type="content" source="./media/import-export-analytics-rules/import-rule.png" alt-text="Import analytics rule" lightbox="./media/import-export-analytics-rules/import-rule.png"::: + :::image type="content" source="./media/import-export-analytics-rules/import-analytics-rule.png" alt-text="Import analytics rule" lightbox="./media/import-export-analytics-rules/import-analytics-rule.png"::: > [!NOTE] > You can import **up to 50** analytics rules from a single ARM template file. |
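Given the 50-rule limit on a single import, a large export may need to be split before importing. A minimal sketch, where the helper name and template shape are illustrative:

```python
# Hypothetical helper: split an ARM template carrying many analytics rules
# into chunks that respect the 50-rules-per-import limit.
def chunk_resources(template: dict, limit: int = 50):
    resources = template.get("resources", [])
    for start in range(0, len(resources), limit):
        # Each chunk keeps the template's top-level keys, with a sliced
        # "resources" list no longer than the limit.
        yield {**template, "resources": resources[start:start + limit]}

template = {
    "contentVersion": "1.0.0.0",
    "resources": [{"name": f"rule-{i}"} for i in range(120)],  # placeholder rules
}
chunks = list(chunk_resources(template))
print([len(c["resources"]) for c in chunks])  # [50, 50, 20]
```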
sentinel | Import Export Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/import-export-automation-rules.md | + + Title: Import and export Microsoft Sentinel automation rules | Microsoft Docs +description: Export and import automation rules to and from ARM templates to aid deployment +++ Last updated : 08/07/2024+appliesto: + - Microsoft Sentinel in the Azure portal + - Microsoft Sentinel in the Microsoft Defender portal ++++# Export and import automation rules to and from ARM templates ++Manage your Microsoft Sentinel automation rules as code! You can now export your automation rules to Azure Resource Manager (ARM) template files, and import rules from these files, as part of your program to manage and control your Microsoft Sentinel deployments as code. The export action creates a JSON file in your browser's downloads location, that you can then rename, move, and otherwise handle like any other file. ++The exported JSON file is workspace-independent, so it can be imported to other workspaces and even other tenants. As code, it can also be version-controlled, updated, and deployed in a managed CI/CD framework. ++The file includes all the parameters defined in the automation rule. Rules of any trigger type can be exported to a JSON file. ++This article shows you how to export and import automation rules. ++> [!IMPORTANT] +> +> - Exporting and importing rules is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> +> - [!INCLUDE [unified-soc-preview](includes/unified-soc-preview-without-alert.md)] ++## Export rules ++1. From the Microsoft Sentinel navigation menu, select **Automation**. ++1. 
Select the rule (or rules—see note) you want to export, and select **Export** from the bar at the top of the screen. ++ :::image type="content" source="./media/import-export-automation-rules/export-automation-rule.png" alt-text="Screenshot showing how to export an automation rule." lightbox="./media/import-export-automation-rules/export-automation-rule.png"::: ++ Find the exported file in your Downloads folder. It has the same name as the automation rule, with a .json extension. ++ > [!NOTE] + > - You can select multiple automation rules at once for export by marking the check boxes next to the rules and selecting **Export** at the end. + > + > - You can export all the rules on a single page of the display grid at once, by marking the check box in the header row before clicking **Export**. You can't export more than one page's worth of rules at a time, though. + > + > - In this scenario, a single file (named *Azure_Sentinel_automation_rules.json*) is created, and contains JSON code for all the exported rules. ++## Import rules ++1. Have an automation rule ARM template JSON file ready. ++1. From the Microsoft Sentinel navigation menu, select **Automation**. ++1. Select **Import** from the bar at the top of the screen. In the resulting dialog box, navigate to and select the JSON file representing the rule you want to import, and select **Open**. ++ :::image type="content" source="./media/import-export-automation-rules/import-automation-rule.png" alt-text="Screenshot showing how to import an automation rule." lightbox="./media/import-export-automation-rules/import-automation-rule.png"::: ++ > [!NOTE] + > You can import **up to 50** automation rules from a single ARM template file. ++## Troubleshooting ++If you have any issues importing an exported automation rule, consult the following table. 
++| Behavior (with *error*) | Reason | Suggested action | +| -- | -- | -- | +| **Imported automation rule is disabled**<br>-*and*-<br>**The rule's *analytics rule* condition displays "Unknown rule"** | The rule contains a condition that refers to an analytics rule that doesn't exist in the target workspace. | <ol><li>Export the referenced analytics rule from the original workspace and import it to the target one.<li>Edit the automation rule in the target workspace, choosing the now-present analytics rule from the drop-down.<li>Enable the automation rule.</ol> | +| **Imported automation rule is disabled**<br>-*and*-<br>**The rule's *custom details key* condition displays "Unknown custom details key"** | The rule contains a condition that refers to a [custom details key](surface-custom-details-in-alerts.md) that isn't defined in any analytics rules in the target workspace. | <ol><li>Export the referenced analytics rule from the original workspace and import it to the target one.<li>Edit the automation rule in the target workspace, choosing the now-present analytics rule from the drop-down.<li>Enable the automation rule.</ol> | +| **Deployment failed in target workspace, with error message: "*Automation rules failed to deploy.*"**<br>Deployment details contain the reasons listed in the next column for failure. | The playbook was moved.<br>-*or*-<br>The playbook was deleted.<br>-*or*-<br>The target workspace doesn't have access to the playbook. | Make sure the playbook exists, and that the target workspace has the right access to the resource group that contains the playbook. | +| **Deployment failed in target workspace, with error message: "*Automation rules failed to deploy.*"**<br>Deployment details contain the reasons listed in the next column for failure. | The automation rule was past its defined expiration date when you imported it.
| **If you want the rule to remain expired in its original workspace:**<ol><li>Edit the JSON file that represents the exported automation rule.<li>Find the expiration date (that appears immediately after the string `"expirationTimeUtc":`) and replace it with a new expiration date (in the future).<li>Save the file and re-import it into the target workspace.</ol>**If you want the rule to return to active status in its original workspace:**<ol><li>Edit the automation rule in the original workspace and change its expiration date to a date in the future.<li>Export the rule again from the original workspace.<li>Import the newly exported version into the target workspace.</ol> | +| **Deployment failed in target workspace, with error message:<br>"*The JSON file you attempted to import has an invalid format. Please check the file and try again.*"** | The imported file isn't a valid JSON file. | Check the file for problems and try again. For best results, export the original rule again to a new file, then try the import again. | +| **Deployment failed in target workspace, with error message:<br>"*No resources found in the file. Please ensure the file contains deployment resources and try again.*"** | The list of resources under the "resources" key in the JSON file is empty. | Check the file for problems and try again. For best results, export the original rule again to a new file, then try the import again. | ++## Next steps ++In this document, you learned how to export and import automation rules to and from ARM templates. +- Learn more about [automation rules](automate-incident-handling-with-automation-rules.md) and [how to create and work with them](create-manage-use-automation-rules.md). +- Learn more about [ARM templates](../azure-resource-manager/templates/overview.md). |
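The expired-rule fix in the table above (edit the value after `"expirationTimeUtc":` before re-importing) can be scripted. The property path used below is an assumption for illustration; check your exported file for the exact layout:

```python
from datetime import datetime, timedelta, timezone

# Assumed layout: "expirationTimeUtc" lives under properties.triggeringLogic
# in each exported resource. Verify against your own exported JSON.
def refresh_expiration(template: dict, days: int = 30) -> dict:
    new_expiry = (datetime.now(timezone.utc) + timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    for resource in template.get("resources", []):
        logic = resource.get("properties", {}).get("triggeringLogic", {})
        if "expirationTimeUtc" in logic:
            logic["expirationTimeUtc"] = new_expiry  # push the date into the future
    return template

template = {
    "resources": [
        {"properties": {"triggeringLogic": {"expirationTimeUtc": "2023-01-01T00:00:00Z"}}}
    ]
}
expiry = refresh_expiration(template)["resources"][0]["properties"]["triggeringLogic"]["expirationTimeUtc"]
print(expiry > "2023-01-01T00:00:00Z")  # True: the rule will import as active
```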
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | description: Learn about the latest new features and announcements in Microsoft S Previously updated : 07/30/2024 Last updated : 08/18/2024 # What's new in Microsoft Sentinel The listed features were released in the last three months. For information abou ## August 2024 +- [Export and import automation rules (Preview)](#export-and-import-automation-rules-preview) - [Microsoft Sentinel support in Microsoft Defender multitenant management (Preview)](#microsoft-sentinel-support-in-microsoft-defender-multitenant-management-preview) - [Premium Microsoft Defender Threat Intelligence data connector (Preview)](#premium-microsoft-defender-threat-intelligence-data-connector-preview) - [Unified AMA-based connectors for syslog ingestion](#unified-ama-based-connectors-for-syslog-ingestion) The listed features were released in the last three months. For information abou - [New Auxiliary logs retention plan (Preview)](#new-auxiliary-logs-retention-plan-preview) - [Create summary rules for large sets of data (Preview)](#create-summary-rules-in-microsoft-sentinel-for-large-sets-of-data-preview) +### Export and import automation rules (Preview) ++Manage your Microsoft Sentinel automation rules as code! You can now export your automation rules to Azure Resource Manager (ARM) template files, and import rules from these files, as part of your program to manage and control your Microsoft Sentinel deployments as code. The export action creates a JSON file in your browser's downloads location, that you can then rename, move, and otherwise handle like any other file. ++The exported JSON file is workspace-independent, so it can be imported to other workspaces and even other tenants. As code, it can also be version-controlled, updated, and deployed in a managed CI/CD framework. ++The file includes all the parameters defined in the automation rule.
Rules of any trigger type can be exported to a JSON file. ++Learn more about [exporting and importing automation rules](import-export-automation-rules.md). + ### Microsoft Sentinel support in Microsoft Defender multitenant management (Preview) If you've onboarded Microsoft Sentinel to the Microsoft unified security operations platform, Microsoft Sentinel data is now available with Defender XDR data in Microsoft Defender multitenant management. Only one Microsoft Sentinel workspace per tenant is currently supported in the Microsoft unified security operations platform. So, Microsoft Defender multitenant management shows security information and event management (SIEM) data from one Microsoft Sentinel workspace per tenant. For more information, see [Microsoft Defender multitenant management](/defender-xdr/mto-overview) and [Microsoft Sentinel in the Microsoft Defender portal](microsoft-sentinel-defender-portal.md). For more information, see: - [Optimize your security operations](soc-optimization/soc-optimization-access.md) - [SOC optimization reference of recommendations](soc-optimization/soc-optimization-reference.md) - ## April 2024 - [Unified security operations platform in the Microsoft Defender portal (preview)](#unified-security-operations-platform-in-the-microsoft-defender-portal-preview) |
service-bus-messaging | Service Bus Partitioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-partitioning.md | Service Bus supports automatic message forwarding from, to, or between partition ## Partitioned entities limitations Currently Service Bus imposes the following limitations on partitioned queues and topics: -- For partitioned premium namespaces, the message size is limited to 1 MB when the messages are sent individually, and the batch size is limited to 1 MB when the messages are sent in a batch.+- For partitioned premium namespaces, the message size is limited to 1 MB when the messages are sent individually, and the batch size is limited to 1 MB when the messages are sent in a batch. If [large message support](/azure/service-bus-messaging/service-bus-premium-messaging) is enabled, the size limit can be up to 100 MB. + - Partitioned queues and topics don't support sending messages that belong to different sessions in a single transaction. - Service Bus currently allows up to 100 partitioned queues or topics per namespace for the Basic and Standard SKU. Each partitioned queue or topic counts towards the quota of 10,000 entities per namespace. |
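A client-side guard for the size limits above can be sketched as follows; whether large message support is enabled is a namespace setting, modeled here as a plain boolean:

```python
# Hypothetical pre-send check mirroring the documented limits: 1 MB per
# message or batch on partitioned premium namespaces, up to 100 MB when
# large message support is enabled.
MB = 1024 * 1024

def max_payload_bytes(large_message_support: bool) -> int:
    return 100 * MB if large_message_support else 1 * MB

def fits(payload: bytes, large_message_support: bool = False) -> bool:
    return len(payload) <= max_payload_bytes(large_message_support)

print(fits(b"x" * (2 * MB)))        # False: over the 1 MB limit
print(fits(b"x" * (2 * MB), True))  # True: within the 100 MB limit
```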
storage | Storage Blob Container Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to create a blob container. To learn more, see the authorization guidance for the following REST API operation:- - [Create Container](/rest/api/storageservices/create-container#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to create a container. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Create Container (REST API)](/rest/api/storageservices/create-container#authorization). + [!INCLUDE [storage-dev-guide-about-container-naming](../../../includes/storage-dev-guides/storage-dev-guide-about-container-naming.md)] |
storage | Storage Blob Container Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to delete a blob container, or to restore a soft-deleted container. To learn more, see the authorization guidance for the following REST API operations:- - [Delete Container](/rest/api/storageservices/delete-container#authorization) - - [Restore Container](/rest/api/storageservices/restore-container#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to delete or restore a container. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Delete Container (REST API)](/rest/api/storageservices/delete-container#authorization) and [Restore Container (REST API)](/rest/api/storageservices/restore-container#authorization). + ## Delete a container |
storage | Storage Blob Container Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with a container lease. To learn more, see the authorization guidance for the following REST API operation:- - [Lease Container](/rest/api/storageservices/lease-container#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to work with a container lease. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Lease Container (REST API)](/rest/api/storageservices/lease-container#authorization). + ## About container leases |
storage | Storage Blob Container Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with container properties or metadata. To learn more, see the authorization guidance for the following REST API operations:- - [Get Container Properties](/rest/api/storageservices/get-container-properties#authorization) - - [Set Container Metadata](/rest/api/storageservices/set-container-metadata#authorization) - - [Get Container Metadata](/rest/api/storageservices/get-container-metadata#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to work with container properties or metadata. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Reader** or higher for the *get* operations, and **Storage Blob Data Contributor** or higher for the *set* operations. To learn more, see the authorization guidance for [Get Container Properties (REST API)](/rest/api/storageservices/get-container-properties#authorization), [Set Container Metadata (REST API)](/rest/api/storageservices/set-container-metadata#authorization), or [Get Container Metadata (REST API)](/rest/api/storageservices/get-container-metadata#authorization). + ## About properties and metadata |
storage | Storage Blob Containers List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to list blob containers. To learn more, see the authorization guidance for the following REST API operation:- - [List Containers](/rest/api/storageservices/list-containers2#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to list blob containers. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [List Containers (REST API)](/rest/api/storageservices/list-containers2#authorization). + ## About container listing options |
storage | Storage Blob Copy Async Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md | description: Learn how to copy a blob with asynchronous scheduling in Azure Stor Previously updated : 08/05/2024 Last updated : 08/26/2024 ms.devlang: java This article shows how to copy a blob with asynchronous scheduling using the [Az The client library methods covered in this article use the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and can be used when you want to perform a copy with asynchronous scheduling. For most copy scenarios where you want to move data into a storage account and have a URL for the source object, see [Copy a blob from a source object URL with Java](storage-blob-copy-url-java.md). -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform a copy operation, or to abort a pending copy. To learn more, see the authorization guidance for the following REST API operations:- - [Copy Blob](/rest/api/storageservices/copy-blob#authorization) - - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to perform a copy operation, or to abort a pending copy. For authorization with Microsoft Entra ID (recommended), the least privileged Azure RBAC built-in role varies based on several factors. 
To learn more, see the authorization guidance for [Copy Blob (REST API)](/rest/api/storageservices/copy-blob#authorization) or [Abort Copy Blob (REST API)](/rest/api/storageservices/abort-copy-blob#authorization). + [!INCLUDE [storage-dev-guide-blob-copy-async](../../../includes/storage-dev-guides/storage-dev-guide-about-blob-copy-async.md)] |
storage | Storage Blob Copy Url Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md | description: Learn how to copy a blob from a source object URL in Azure Storage Previously updated : 08/05/2024 Last updated : 08/26/2024 ms.devlang: java This article shows how to copy a blob from a source object URL using the [Azure The client library methods covered in this article use the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) and [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operations. These methods are preferred for copy scenarios where you want to move data into a storage account and have a URL for the source object. For copy operations where you want asynchronous scheduling, see [Copy a blob with asynchronous scheduling using Java](storage-blob-copy-async-java.md). -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:- - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url#authorization) - - [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to perform a copy operation. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. 
To learn more, see the authorization guidance for [Put Blob From URL (REST API)](/rest/api/storageservices/put-blob-from-url#authorization) or [Put Block From URL (REST API)](/rest/api/storageservices/put-block-from-url#authorization). + [!INCLUDE [storage-dev-guide-blob-copy-from-url](../../../includes/storage-dev-guides/storage-dev-guide-about-blob-copy-from-url.md)] If you're copying a blob from a source within Azure, access to the source blob c The following example shows a scenario for copying from a source blob within Azure. The [uploadFromUrl](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details) method can optionally accept a Boolean parameter to indicate whether an existing blob should be overwritten, as shown in the example. The [uploadFromUrlWithResponse](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details) method can also accept a [BlobUploadFromUrlOptions](/java/api/com.azure.storage.blob.options.blobuploadfromurloptions) parameter to specify further options for the operation. The [uploadFromUrlWithResponse](/java/api/com.azure.storage.blob.specialized.blo You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL. ## Resources To learn more about copying blobs using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopyPutFromURL.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. 
The client library methods covered in this article use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API) - [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] [!INCLUDE [storage-dev-guide-next-steps-java](../../../includes/storage-dev-guides/storage-dev-guide-next-steps-java.md)] |
storage | Storage Blob Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to delete a blob, or to restore a soft-deleted blob. To learn more, see the authorization guidance for the following REST API operations:- - [Delete Blob](/rest/api/storageservices/delete-blob#authorization) - - [Undelete Blob](/rest/api/storageservices/undelete-blob#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to delete a blob, or to restore a soft-deleted blob. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Delete Blob (REST API)](/rest/api/storageservices/delete-blob#authorization) and [Undelete Blob (REST API)](/rest/api/storageservices/undelete-blob#authorization). + ## Delete a blob |
storage | Storage Blob Download Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform a download operation. To learn more, see the authorization guidance for the following REST API operation:- - [Get Blob](/rest/api/storageservices/get-blob#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to perform a download operation. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Reader** or higher. To learn more, see the authorization guidance for [Get Blob (REST API)](/rest/api/storageservices/get-blob#authorization). + ## Download a blob |
storage | Storage Blob Java Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md | An easy and secure way to authorize access and connect to Blob Storage is to obt Make sure you have the correct dependencies in pom.xml and the necessary import directives, as described in [Set up your project](#set-up-your-project). -The following example uses [BlobServiceClientBuilder](/java/api/com.azure.storage.blob.blobserviceclientbuilder) to build a `BlobServiceClient` object using `DefaultAzureCredential`: +The following example uses [BlobServiceClientBuilder](/java/api/com.azure.storage.blob.blobserviceclientbuilder) to build a `BlobServiceClient` object using `DefaultAzureCredential`, and shows how to create container and blob clients, if needed: :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/App.java" id="Snippet_GetServiceClientAzureAD"::: |
storage | Storage Blob Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with a blob lease. To learn more, see the authorization guidance for the following REST API operation:- - [Lease Blob](/rest/api/storageservices/lease-blob#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to work with a blob lease. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Lease Blob (REST API)](/rest/api/storageservices/lease-blob#authorization). + ## About blob leases |
storage | Storage Blob Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md | -## Prerequisites --- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with blob properties or metadata. To learn more, see the authorization guidance for the following REST API operations:- - [Set Blob Properties](/rest/api/storageservices/set-blob-properties#authorization) - - [Get Blob Properties](/rest/api/storageservices/get-blob-properties#authorization) - - [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata#authorization) - - [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata#authorization) ++## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to work with blob properties or metadata. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Reader** or higher for the *get* operations, and **Storage Blob Data Contributor** or higher for the *set* operations. To learn more, see the authorization guidance for [Set Blob Properties (REST API)](/rest/api/storageservices/set-blob-properties#authorization), [Get Blob Properties (REST API)](/rest/api/storageservices/get-blob-properties#authorization), [Set Blob Metadata (REST API)](/rest/api/storageservices/set-blob-metadata#authorization), or [Get Blob Metadata (REST API)](/rest/api/storageservices/get-blob-metadata#authorization). 
+ ## About properties and metadata |
storage | Storage Blob Tags Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to work with blob index tags. To learn more, see the authorization guidance for the following REST API operations:- - [Get Blob Tags](/rest/api/storageservices/get-blob-tags#authorization) - - [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization) - - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to work with blob index tags. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Owner** or higher. To learn more, see the authorization guidance for [Get Blob Tags (REST API)](/rest/api/storageservices/get-blob-tags#authorization), [Set Blob Tags (REST API)](/rest/api/storageservices/set-blob-tags#authorization), or [Find Blobs by Tags (REST API)](/rest/api/storageservices/find-blobs-by-tags#authorization). + [!INCLUDE [storage-dev-guide-about-blob-tags](../../../includes/storage-dev-guides/storage-dev-guide-about-blob-tags.md)] |
storage | Storage Blob Upload Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to perform an upload operation. To learn more, see the authorization guidance for the following REST API operations:- - [Put Blob](/rest/api/storageservices/put-blob#authorization) - - [Put Block](/rest/api/storageservices/put-block#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to upload a blob. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Put Blob (REST API)](/rest/api/storageservices/put-blob#authorization) and [Put Block (REST API)](/rest/api/storageservices/put-block#authorization). + ## Upload data to a block blob |
storage | Storage Blob Use Access Tier Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to set the blob's access tier. To learn more, see the authorization guidance for the following REST API operation:- - [Set Blob Tier](/rest/api/storageservices/set-blob-tier#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to set a blob's access tier. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Set Blob Tier](/rest/api/storageservices/set-blob-tier#authorization). + [!INCLUDE [storage-dev-guide-about-access-tiers](../../../includes/storage-dev-guides/storage-dev-guide-about-access-tiers.md)] |
storage | Storage Blobs List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md | -## Prerequisites -- This article assumes you already have a project set up to work with the Azure Blob Storage client library for Java. To learn about setting up your project, including package installation, adding `import` directives, and creating an authorized client object, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md).-- The [authorization mechanism](../common/authorize-data-access.md) must have permissions to list blobs. To learn more, see the authorization guidance for the following REST API operation:- - [List Blobs](/rest/api/storageservices/list-blobs#authorization) +## Set up your environment +++#### Add import statements ++Add the following `import` statements: +++#### Authorization ++The authorization mechanism must have the necessary permissions to list blobs. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Reader** or higher. To learn more, see the authorization guidance for [List Blobs (REST API)](/rest/api/storageservices/list-blobs#authorization). + ## About blob listing options |
storage | Files Change Redundancy Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-change-redundancy-configuration.md | + + Title: Change redundancy configuration for Azure Files +description: Learn how to change how Azure Files data in an existing storage account is replicated. +++ Last updated : 08/26/2024+++++# Change how Azure Files data is replicated ++Azure always stores multiple copies of your data to protect it against both planned and unplanned events. These events include transient hardware failures, network or power outages, and natural disasters. Data redundancy ensures that your storage account meets the [Service-Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/), even in the face of failures. ++This article describes the process of changing replication settings for an existing storage account. ++## Options for changing the replication type ++When deciding which redundancy configuration is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy configuration you should choose include: ++- **How your data is replicated within the primary region.** Data in the primary region can be replicated locally using [locally redundant storage (LRS)](files-redundancy.md#locally-redundant-storage), or across Azure availability zones using [zone-redundant storage (ZRS)](files-redundancy.md#zone-redundant-storage). +- **Whether your data requires geo-redundancy.** Geo-redundancy provides protection against regional disasters by replicating your data to a second region that is geographically distant from the primary region. Azure Files supports both [geo-redundant storage (GRS)](files-redundancy.md#geo-redundant-storage) and [geo-zone-redundant storage (GZRS)](files-redundancy.md#geo-zone-redundant-storage). 
++> [!IMPORTANT] +> Azure Files doesn't support read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). If a storage account is configured to use RA-GRS or RA-GZRS, the file shares will be configured and billed as GRS or GZRS. ++For a detailed overview of all of the redundancy options for Azure Files, see [Azure Files redundancy](files-redundancy.md). ++You can change your storage account's redundancy configurations as needed, though some configurations are subject to [limitations](#limitations-for-changing-replication-types) and [downtime requirements](#downtime-requirements). Reviewing these limitations and requirements before making any changes within your environment helps avoid conflicts with your own timeframe and uptime requirements. ++There are three ways to change the replication settings: ++- [Add or remove geo-redundancy or read access](#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) to the secondary region. +- [Add or remove zone-redundancy](#perform-a-conversion) by performing a conversion. +- [Perform a manual migration](#manual-migration) in scenarios where the first two options aren't supported, or to ensure the change is completed within a specific timeframe. ++Geo-redundancy and read-access can be changed at the same time. However, any change that also involves zone-redundancy requires a conversion and must be performed separately using a two-step process. These two steps can be performed in any order. ++### Changing redundancy configuration ++The following table provides an overview of how to switch between replication types. ++> [!NOTE] +> Manual migration is an option for any scenario in which you want to change the replication setting within the [limitations for changing replication types](#limitations-for-changing-replication-types). The manual migration option is excluded from the following table for simplification. 
++| Switching | …to LRS | …to GRS <sup>6</sup> | …to ZRS | …to GZRS <sup>2,6</sup> | +|--|--|--|--|--| +| **…from LRS** | **N/A** | Use [Azure portal](files-change-redundancy-configuration.md?tabs=portal#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), [PowerShell](files-change-redundancy-configuration.md?tabs=powershell#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), or [CLI](files-change-redundancy-configuration.md?tabs=azure-cli#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) <sup>1,2</sup> | [Perform a conversion](#perform-a-conversion)<sup>2,3,4,5</sup> | First, use the [Portal](files-change-redundancy-configuration.md?tabs=portal#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), [PowerShell](files-change-redundancy-configuration.md?tabs=powershell#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), or [CLI](files-change-redundancy-configuration.md?tabs=azure-cli#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) to switch to GRS <sup>1</sup>, then [perform a conversion](#perform-a-conversion) to GZRS <sup>3,4,5</sup> | +| **…from GRS** | Use [Azure portal](files-change-redundancy-configuration.md?tabs=portal#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), [PowerShell](files-change-redundancy-configuration.md?tabs=powershell#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), or [CLI](files-change-redundancy-configuration.md?tabs=azure-cli#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) | **N/A** | First, use the [Portal](files-change-redundancy-configuration.md?tabs=portal#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), 
[PowerShell](files-change-redundancy-configuration.md?tabs=powershell#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), or [CLI](files-change-redundancy-configuration.md?tabs=azure-cli#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) to switch to LRS, then [perform a conversion](#perform-a-conversion) to ZRS <sup>3,5</sup> | [Perform a conversion](#perform-a-conversion)<sup>3,5</sup> | +| **…from ZRS** | [Perform a conversion](#perform-a-conversion)<sup>3</sup> | First, use the [Portal](files-change-redundancy-configuration.md?tabs=portal#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), [PowerShell](files-change-redundancy-configuration.md?tabs=powershell#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), or [CLI](files-change-redundancy-configuration.md?tabs=azure-cli#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) to switch to GZRS, then [perform a conversion](#perform-a-conversion) to GRS<sup>3</sup> | **N/A** | Use [Azure portal](files-change-redundancy-configuration.md?tabs=portal#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), [PowerShell](files-change-redundancy-configuration.md?tabs=powershell#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), or [CLI](files-change-redundancy-configuration.md?tabs=azure-cli#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) <sup>1</sup> | +| **…from GZRS** | First, use the [Portal](files-change-redundancy-configuration.md?tabs=portal#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), [PowerShell](files-change-redundancy-configuration.md?tabs=powershell#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli), or 
[CLI](files-change-redundancy-configuration.md?tabs=azure-cli#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli) to switch to ZRS, then [perform a conversion](#perform-a-conversion) to LRS <sup>3</sup> | [Perform a conversion](#perform-a-conversion)<sup>3</sup> | [Use Azure portal, PowerShell, or CLI](#change-the-redundancy-configuration-using-azure-portal-powershell-or-azure-cli)| **N/A** | ++<sup>1</sup> [Adding geo-redundancy incurs a one-time egress charge](#costs-associated-with-changing-how-data-is-replicated).<br /> +<sup>2</sup> If your storage account contains blobs in the archive tier, review the [access tier limitations](../common/redundancy-migration.md#access-tier) before changing the redundancy type to geo- or zone-redundant.<br /> +<sup>3</sup> The type of conversion supported depends on the storage account type. For more information, see the [storage account table](#storage-account-type).<br /> +<sup>4</sup> Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported. For more information, see [Failover and failback](#failover-and-failback).<br /> +<sup>5</sup> Converting from LRS to ZRS [isn't supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares](#protocol-support). <br /> +<sup>6</sup> Even though enabling geo-redundancy appears to occur instantaneously, failover to the secondary region can't be initiated until data synchronization between the two regions is complete.<br /> ++## Change the replication setting ++Depending on your scenario from the [changing redundancy configuration](#changing-redundancy-configuration) section, use one of the following methods to change your replication settings. 
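The transition matrix above reduces to a small lookup: each redundancy change is either a direct portal/PowerShell/CLI change, a conversion, or a two-step combination of the two. As an illustrative sketch only (the function and dictionary names are hypothetical, not part of the article or any Azure SDK), the steps could be encoded like this:

```python
# Illustrative encoding of the redundancy-transition matrix above.
# "portal" means a direct change via Azure portal, PowerShell, or CLI;
# tuples list the two ordered steps of a combined change.
TRANSITIONS = {
    ("LRS", "GRS"): "portal",
    ("LRS", "ZRS"): "conversion",
    ("LRS", "GZRS"): ("portal to GRS", "conversion to GZRS"),
    ("GRS", "LRS"): "portal",
    ("GRS", "ZRS"): ("portal to LRS", "conversion to ZRS"),
    ("GRS", "GZRS"): "conversion",
    ("ZRS", "LRS"): "conversion",
    ("ZRS", "GRS"): ("portal to GZRS", "conversion to GRS"),
    ("ZRS", "GZRS"): "portal",
    ("GZRS", "LRS"): ("portal to ZRS", "conversion to LRS"),
    ("GZRS", "GRS"): "conversion",
    ("GZRS", "ZRS"): "portal",
}

def steps(source: str, target: str) -> list[str]:
    """Return the ordered steps needed to move between redundancy settings."""
    if source == target:
        return []
    method = TRANSITIONS[(source, target)]
    return list(method) if isinstance(method, tuple) else [method]

print(steps("LRS", "GZRS"))  # two steps: portal change to GRS, then conversion
```

Note that the footnoted limitations (archive tier, NFS protocol support, post-failover accounts) still apply to each step; this sketch captures only the method, not the eligibility checks.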
++### Change the redundancy configuration using Azure portal, PowerShell, or Azure CLI ++In most cases, you can use the Azure portal, PowerShell, or the Azure CLI to change the geo-redundant or read access (RA) replication setting for a storage account. ++Changing how your storage account is replicated in the Azure portal doesn't result in downtime for your applications, including changes that require a conversion. ++# [Portal](#tab/portal) ++To change the redundancy option for your storage account in the Azure portal, follow these steps: ++1. Navigate to your storage account in the Azure portal. +1. Under **Data management**, select **Redundancy**. +1. Update the **Redundancy** setting. +1. Select **Save**. ++ :::image type="content" source="../common/media/redundancy-migration/change-replication-option-sml.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="../common/media/redundancy-migration/change-replication-option.png"::: ++# [PowerShell](#tab/powershell) ++You can use Azure PowerShell to change the redundancy options for your storage account. ++To change between locally redundant and geo-redundant storage, call the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) cmdlet and specify the `-SkuName` parameter. ++```powershell +Set-AzStorageAccount -ResourceGroupName <resource_group> ` + -Name <storage_account> ` + -SkuName <sku> +``` ++# [Azure CLI](#tab/azure-cli) ++You can use the Azure CLI to change the redundancy options for your storage account. 
++To change between locally redundant and geo-redundant storage, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and specify the `--sku` parameter: ++```azurecli-interactive +az storage account update \ + --name <storage-account> \ + --resource-group <resource_group> \ + --sku <sku> +``` ++++### Perform a conversion ++A redundancy "conversion" is the process of changing the zone-redundancy aspect of a storage account. ++During a conversion, there's [no data loss or application downtime required](#downtime-requirements). ++There are two ways to initiate a conversion: ++- [Customer-initiated](#customer-initiated-conversion) +- [Support-initiated](#support-initiated-conversion) ++> [!TIP] +> Microsoft recommends using a customer-initiated conversion instead of support-initiated conversion whenever possible. A customer-initiated conversion allows you to initiate the conversion and monitor its progress directly from within the Azure portal. Because the conversion is initiated by the customer, there is no need to create and manage a support request. ++#### Customer-initiated conversion ++Instead of opening a support request, customers in most regions can start a conversion and monitor its progress. This option eliminates potential delays related to creating and managing support requests. For help determining the regions in which customer-initiated conversion is supported, see the [region limitations](#region) section. ++Customer-initiated conversion can be completed in supported regions using the Azure portal, PowerShell, or the Azure CLI. After initiation, the conversion could still take up to 72 hours to begin. ++> [!IMPORTANT] +> There is no SLA for completion of a conversion. 
+> +> For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency). ++# [Portal](#tab/portal) ++To add or modify a storage account's zonal-redundancy within the Azure portal, perform these steps: ++1. Navigate to your storage account in the Azure portal. +1. Under **Data management** select **Redundancy**. +1. Update the **Redundancy** setting. +1. Select **Save**. ++ :::image type="content" source="../common/media/redundancy-migration/change-replication-zone-option-sml.png" alt-text="Screenshot showing how to change the zonal-replication option in portal." lightbox="../common/media/redundancy-migration/change-replication-zone-option.png"::: ++# [PowerShell](#tab/powershell) ++To change between locally redundant and zone-redundant storage with PowerShell, call the [Start-AzStorageAccountMigration](/powershell/module/az.storage/start-azstorageaccountmigration) cmdlet and specify the `-TargetSku` parameter: ++```powershell +Start-AzStorageAccountMigration + -AccountName <String> + -ResourceGroupName <String> + -TargetSku <String> + -AsJob +``` ++# [Azure CLI](#tab/azure-cli) ++To change between locally redundant and zone-redundant storage with Azure CLI, call the [az storage account migration start](/cli/azure/storage/account/migration#az-storage-account-migration-start) command and specify the `--sku` parameter: ++```azurecli-interactive +az storage account migration start \ + --account-name <string> \ + -g <string> \ + --sku <string> \ + --no-wait +``` ++++##### Monitoring customer-initiated conversion progress ++As the conversion request is evaluated and processed, the status should progress through the list shown in the following table: ++| Status | Explanation | +||--| +| Submitted for conversion | The conversion request was successfully submitted for processing. | +| In Progress<sup>1</sup> | The conversion is in progress. 
| +| Completed<br />**- or -**<br />Failed<sup>2</sup> | The conversion is completed successfully.<br />**- or -**<br />The conversion failed. | ++<sup>1</sup> Once initiated, the conversion could take up to 72 hours to begin. If the conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).<br /> +<sup>2</sup> If the conversion fails, submit a support request to Microsoft to determine the reason for the failure.<br /> ++> [!NOTE] +> While Microsoft handles your request for a conversion promptly, there's no guarantee as to when it will complete. If you need your data converted by a certain date, Microsoft recommends that you perform a manual migration instead. +> +> Generally, the more data you have in your account, the longer it takes to replicate that data to other zones in the region. ++# [Portal](#tab/portal) ++The status of your customer-initiated conversion is displayed on the **Redundancy** page of the storage account: +++# [PowerShell](#tab/powershell) ++To track the current migration status of the conversion initiated on your storage account, call the [Get-AzStorageAccountMigration](/powershell/module/az.storage/get-azstorageaccountmigration) cmdlet: ++```powershell +Get-AzStorageAccountMigration + -AccountName <String> + -ResourceGroupName <String> +``` ++# [Azure CLI](#tab/azure-cli) ++To track the current migration status of the conversion initiated on your storage account, use the [az storage account migration show](/cli/azure/storage/account/migration#az-storage-account-migration-show) command: ++```azurecli-interactive +az storage account migration show \ + --account-name <string> \ + -g <string> \ + -n "default" +``` ++++#### Support-initiated conversion ++Customers can request a conversion by opening a support request with Microsoft. 
++> [!TIP] +> If you need to convert more than one storage account, create a single support ticket and specify the names of the accounts to convert on the **Additional details** tab. ++Follow these steps to request a conversion from Microsoft: ++1. In the Azure portal, navigate to a storage account that you want to convert. +1. Under **Support + troubleshooting**, select **New Support Request**. +1. Complete the **Problem description** tab based on your account information: + - **Summary**: (some descriptive text). + - **Issue type**: Select **Technical**. + - **Subscription**: Select your subscription from the drop-down. + - **Service**: Select **My Services**, then **Storage Account Management** for the **Service type**. + - **Resource**: Select a storage account to convert. If you need to specify multiple storage accounts, you can do so on the **Additional details** tab. + - **Problem type**: Choose **Data Migration**. + - **Problem subtype**: Choose **Migrate to ZRS, GZRS, or RA-GZRS**. ++ :::image type="content" source="../common/media/redundancy-migration/request-live-migration-problem-desc-portal-sml.png" alt-text="Screenshot showing how to request a conversion - Problem description tab." lightbox="../common/media/redundancy-migration/request-live-migration-problem-desc-portal.png"::: ++1. Select **Next**. The **Recommended solution** tab might be displayed briefly before it switches to the **Solutions** page. On the **Solutions** page, you can check the eligibility of your storage account(s) for conversion: + - **Target replication type**: (choose the desired option from the drop-down) + - **Storage accounts from**: (enter a single storage account name or a list of accounts separated by semicolons) + - Select **Submit**. ++ :::image type="content" source="../common/media/redundancy-migration/request-live-migration-solutions-portal-sml.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page." 
lightbox="../common/media/redundancy-migration/request-live-migration-solutions-portal.png"::: ++1. Take the appropriate action if the results indicate your storage account isn't eligible for conversion. Otherwise, select **Return to support request**. ++1. Select **Next**. If you have more than one storage account to migrate, on the **Details** tab, specify the name for each account, separated by a semicolon. ++ :::image type="content" source="../common/media/redundancy-migration/request-live-migration-details-portal-sml.png" alt-text="Screenshot showing how to request a conversion - Additional details tab." lightbox="../common/media/redundancy-migration/request-live-migration-details-portal.png"::: ++1. Provide the required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. An Azure support agent reviews your case and contacts you to provide assistance. ++### Manual migration ++A manual migration provides more flexibility and control than a conversion. You can use this option if you need your data moved by a certain date, or if conversion [isn't supported for your scenario](#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. For more detail, see [Move an Azure Storage account to another region](../common/storage-account-move.md). ++You must perform a manual migration if you want to migrate your storage account to a different region. ++> [!IMPORTANT] +> A manual migration can result in application downtime. If your application requires high availability, Microsoft also provides a [conversion](#perform-a-conversion) option. A conversion is an in-place migration with no downtime. ++With a manual migration, you copy the data from your existing storage account to a new storage account. 
To perform a manual migration, you can use one of the following options: ++- Copy data by using an existing tool such as AzCopy, one of the Azure Storage client libraries, or a reliable non-Microsoft tool. +- If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account to your cluster. Then, parallelize the data copy process with a tool like DistCp. ++For more detailed guidance on how to perform a manual migration, see [Move an Azure Storage account to another region](../common/storage-account-move.md). ++## Limitations for changing replication types ++> [!IMPORTANT] +> Boot diagnostics doesn't support premium storage accounts or zone-redundant storage accounts. When either premium or zone-redundant storage accounts are used for boot diagnostics, users receive a `StorageAccountTypeNotSupported` error upon starting their virtual machine (VM). ++Limitations apply to some replication change scenarios depending on: ++- [Region](#region) +- [Feature conflicts](#feature-conflicts) +- [Storage account type](#storage-account-type) +- [Protocol support](#protocol-support) +- [Failover and failback](#failover-and-failback) ++### Region ++Make sure the region where your storage account is located supports all of the desired replication settings. For example, if you're converting your account to zone-redundant (ZRS or GZRS), make sure your storage account is in a region that supports it. See the lists of supported regions for [Zone-redundant storage](files-redundancy.md#zone-redundant-storage) and [Geo-zone-redundant storage](files-redundancy.md#geo-zone-redundant-storage). 
++> [!IMPORTANT] +> [Customer-initiated conversion](#customer-initiated-conversion) from LRS to ZRS is available in all public regions that support ZRS except for the following: +> +> - (Europe) Italy North +> - (Europe) UK South +> - (Europe) Poland Central +> - (Europe) West Europe +> - (Middle East) Israel Central +> - (North America) Canada Central +> - (North America) East US +> - (North America) East US 2 +> +> [Customer-initiated conversion](#customer-initiated-conversion) from existing ZRS accounts to LRS is available in all public regions. ++### Feature conflicts ++Some storage account features aren't compatible with other features or operations. For example, the ability to fail over to the secondary region is the key feature of geo-redundancy, but other features aren't compatible with failover. For more information about features and services not supported with failover, see [Unsupported features and services](../common/storage-disaster-recovery-guidance.md#unsupported-features-and-services). The conversion of an account to GRS or GZRS might be blocked if a conflicting feature is enabled, or it might be necessary to disable the feature later before initiating a failover. ++### Storage account type ++When planning to change your replication settings, consider the following limitations related to the storage account type. ++Some storage account types only support certain redundancy configurations, which affect whether they can be converted or migrated and, if so, how. For more information on Azure storage account types and the supported redundancy options, see [the storage account overview](../common/storage-account-overview.md#types-of-storage-accounts). 
++The following table lists the redundancy options available for storage account types and whether conversion and manual migration are supported: ++| Storage account type | Supports LRS | Supports ZRS | Supports conversion<br>(from the portal) | Supports conversion<br>(by support request) | Supports manual migration | +|:-|::|::|:--:|:-:|:-:| +| Standard general purpose v2 | ✅ | ✅ | ✅ | ✅ | ✅ | +| Premium file shares | ✅ | ✅ | | ✅ <sup>1</sup> | ✅ | ++<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-initiated-conversion); [Customer-initiated conversion](#customer-initiated-conversion) isn't currently supported.<br /> ++### Protocol support ++You can't convert storage accounts to zone-redundancy (ZRS or GZRS) if either of the following is true: ++- NFSv3 protocol support is enabled for Azure Blob Storage +- The storage account contains Azure Files NFSv4.1 shares ++### Failover and failback ++After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [Initiate the failover](../common/storage-initiate-account-failover.md#initiate-the-failover). ++If you performed a customer-managed account failover to recover from an outage for your GRS account, the account becomes locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported, even for so-called failback operations. For example, if you perform an account failover from GRS to LRS in the secondary region, and then configure it again as GRS, it remains LRS in the new secondary region (the original primary). If you then perform another account failover to fail back to the original primary region, it remains LRS again in the original primary. In this case, you can't perform a conversion to ZRS or GZRS in the primary region. 
Instead, perform a manual migration to add zone-redundancy. ++## Downtime requirements ++During a [conversion](#perform-a-conversion), you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and no data is lost during a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration. ++If you choose to perform a manual migration, downtime is required but you have more control over the timing of the migration process. ++## Timing and frequency ++If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to begin. It could take longer to start if you [request a conversion by opening a support request](#support-initiated-conversion). If a customer-initiated conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress). ++> [!IMPORTANT] +> There is no SLA for completion of a conversion. If you need more control over when a conversion begins and finishes, consider a [Manual migration](#manual-migration). Generally, the more data you have in your account, the longer it takes to replicate that data to other zones or regions. ++After a zone-redundancy conversion, you must wait at least 72 hours before changing the redundancy setting of the storage account again. The temporary hold allows background processes to complete before making another change, ensuring the consistency and integrity of the account. For example, going from LRS to GZRS is a two-step process. 
You must add zone redundancy in one operation, then add geo-redundancy in a second. After going from LRS to ZRS, you must wait at least 72 hours before going from ZRS to GZRS. ++## Costs associated with changing how data is replicated ++Azure Files offers several options for configuring replication. These options, ordered by least- to most-expensive, include: ++- LRS +- ZRS +- GRS +- GZRS ++The costs associated with changing how data is replicated in your storage account depend on which [aspects of your redundancy configuration](#options-for-changing-the-replication-type) you change. A combination of data storage and egress bandwidth pricing determines the cost of making a change. For details on pricing, see the [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/). ++If you add zone-redundancy in the primary region, there's no initial cost associated with making that conversion, but the ongoing data storage cost is higher due to the increased replication and storage space required. ++Geo-redundancy incurs an egress bandwidth charge at the time of the change because your entire storage account is being replicated to the secondary region. All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the secondary region. ++If you remove geo-redundancy (change from GRS to LRS), there's no cost for making the change, but your replicated data is deleted from the secondary location. ++## See also ++- [Azure Files redundancy](files-redundancy.md) |
virtual-desktop | Whats New Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md | |
virtual-network | Public Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md | Full details are listed in the table below: | [Routing preference](routing-preference-overview.md)| Supported to enable more granular control of how traffic is routed between Azure and the Internet. | Not supported.| | Global tier | Supported via [cross-region load balancers](../../load-balancer/cross-region-overview.md).| Not supported. | - Virtual machines attached to a backend pool do not need a public IP address to be attached to a public load balancer. But if they do, matching SKUs are required for load balancer and public IP resources. You can't have a mixture of basic SKU resources and standard SKU resources. You can't attach standalone virtual machines, virtual machines in an availability set resource, or a virtual machine scale set resources to both SKUs simultaneously. New designs should consider using Standard SKU resources. For more information about a standard load balancer, see [Standard Load Balancer](../../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). ## IP address assignment Static public IP addresses are commonly used in the following scenarios: | Basic public IPv4 | :white_check_mark: | :white_check_mark: | | Basic public IPv6 | x | :white_check_mark: | +## Availability Zone ++Standard SKU Public IPs can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md). Basic SKU Public IPs do not have any zones and are created as non-zonal. +A public IP's availability zone can't be changed after the public IP's creation. ++| Value | Behavior | +| | | +| Non-zonal | A non-zonal public IP address is placed into a zone for you by Azure and doesn't give a guarantee of redundancy. | +| Zonal | A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. 
| +| Zone-redundant | A zone-redundant IP is created in all zones for a region and can survive any single zone failure. | ++In regions without availability zones, all public IP addresses are created as non-zonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal. ++> [!IMPORTANT] +> We are updating Standard non-zonal IPs to be zone-redundant by default on a region-by-region basis. This means that in the following 12 regions, all IPs created (except zonal) are zone-redundant. +> Region availability: Canada Central, Poland Central, Israel Central, France Central, Qatar Central, Norway East, Italy North, Sweden Central, South Africa North, Brazil South, Germany West Central, West US 2. + ## Domain Name Label Select this option to specify a DNS label for a public IP resource. This functionality works for both IPv4 addresses (32-bit A records) and IPv6 addresses (128-bit AAAA records). This selection creates a mapping for **domainnamelabel**.**location**.cloudapp.azure.com to the public IP in the Azure-managed DNS. The value of the **Domain Name Label Scope** must match one of the options below. For example, if **SubscriptionReuse** is selected as the option, and a customer who has the example domain name label **contoso.fjdng2acavhkevd8.westus.cloudapp.Azure.com** deletes and re-deploys a public IP address using the same template as before, the domain name label will remain the same. If the customer deploys a public IP address using this same template under a different subscription, the domain name label would change (e.g. **contoso.c9ghbqhhbxevhzg9.westus.cloudapp.Azure.com**). -## Availability Zone --Standard SKU Public IPs can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md). Basic SKU Public IPs do not have any zones and are created as non-zonal. -A public IP's availability zone can't be changed after the public IP's creation. 
--| Value | Behavior | -| | | -| Non-zonal | A non-zonal public IP address is placed into a zone for you by Azure and doesn't give a guarantee of redundancy. | -| Zonal | A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. | -| Zone-redundant | A zone-redundant IP is created in all zones for a region and can survive any single zone failure. | --In regions without availability zones, all public IP addresses are created as nonzonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal. --> [!IMPORTANT] -> We are updating Standard non-zonal IPs to be zone-redundant by default on a region by region basis. This means that in the following 12 regions, all IPs created (except zonal) are zone-redundant. -> Region availability: Central Canada, Central Poland, Central Israel, Central France, Central Qatar, East Norway, Italy North, Sweden Central, South Africa North, South Brazil, West Central Germany, West US 2. - ## Other public IP address features There are other attributes that can be used for a public IP address (Standard SKU only). |