Updates from: 09/28/2024 01:08:02
Service Microsoft Docs article Related commit history on GitHub Change details
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
Title: Authentication and authorization
description: Find out about the built-in authentication and authorization support in Azure App Service and Azure Functions, and how it can help secure your app against unauthorized access. ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5 Previously updated : 03/14/2023 Last updated : 09/27/2024
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
Title: Environment variables and app settings reference description: Describes the commonly used environment variables, and which ones can be modified with app settings. Previously updated : 09/14/2023 Last updated : 09/27/2024
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
# Deploy an agent-based Linux Hybrid Runbook Worker in Automation
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- [!INCLUDE [./agent-based-user-hybrid-runbook-worker-retirement.md](./includes/agent-based-user-hybrid-runbook-worker-retirement.md)] You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](/azure/azure-arc/servers/overview). From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources.
automation Guidance Migration Log Analytics Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md
Previously updated : 09/09/2024 Last updated : 09/27/2024
Using the Azure portal, you can migrate from Change Tracking & Inventory with LA
- Migrate single/multiple VMs from the Virtual Machines page. - Migrate multiple VMs on the LA version solution within a particular Automation Account.
+> [!NOTE]
+> File Integrity Monitoring (FIM) using [Microsoft Defender for Endpoint (MDE)](https://learn.microsoft.com/azure/defender-for-cloud/file-integrity-monitoring-enable-defender-endpoint) is now available. Follow the guidance to migrate from:
+> - [FIM with Change Tracking and Inventory using AMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-ama).
+> - [FIM with Change Tracking and Inventory using MMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-mma).
+ ## Onboarding to Change tracking and inventory using Azure Monitoring Agent ### [Using Azure portal - Azure single VM](#tab/ct-single-vm)
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
Title: Azure Automation Change Tracking and Inventory overview using Azure Monit
description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent, which helps you identify software and Microsoft service changes in your environment. Previously updated : 09/09/2024 Last updated : 09/27/2024
This article explains the latest version of change tracking support using Azure Monitoring Agent as a singular agent for data collection. > [!NOTE]
-> The [Current GA version](/azure/defender-for-cloud/file-integrity-monitoring-enable-log-analytics) of File Integrity Monitoring based on Log Analytics agent, will be deprecated in August 2024, and a **new version will be provided over MDE soon**.  The **[FIM Public Preview](/azure/defender-for-cloud/file-integrity-monitoring-enable-ama) based on Azure Monitor Agent (AMA), will be deprecated when the alternative is provided over MDE**. Hence, the FIM with AMA Public Preview version is not planned for GA. Read the announcement [here](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341).
+> File Integrity Monitoring (FIM) using [Microsoft Defender for Endpoint (MDE)](https://learn.microsoft.com/azure/defender-for-cloud/file-integrity-monitoring-enable-defender-endpoint) is now available. Follow the guidance to migrate from:
+> - [FIM with Change Tracking and Inventory using AMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-ama).
+> - [FIM with Change Tracking and Inventory using MMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-mma).
+ ## Key benefits
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
# Change Tracking and Inventory overview
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- > [!Important] > Change Tracking and Inventory using Log Analytics agent retired on **31 August 2024** and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md
# Troubleshoot Update Management issues
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- [!INCLUDE [./log-analytics-retirement-announcement.md](../includes/log-analytics-retirement-announcement.md)] This article discusses issues that you might run into when using the Update Management feature to assess and manage updates on your machines. There's an agent troubleshooter for the Hybrid Runbook Worker agent to help determine the underlying problem. To learn more about the troubleshooter, see [Troubleshoot Windows update agent issues](update-agent-issues.md) and [Troubleshoot Linux update agent issues](update-agent-issues-linux.md). For other feature deployment issues, see [Troubleshoot feature deployment issues](onboarding.md).
azure-health-insights Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/use-containers.md
curl -X POST 'http://<serverURL>:5000/health-insights/<model>/jobs?api-version=<
#### Example docker compose file
-The below example shows how a [docker compose](https://docs.docker.com/compose/reference/overview) file can be created to deploy the health-insights containers.
+The following example shows how a [docker compose](https://docs.docker.com/compose/) file can be created to deploy the health-insights containers.
```yaml version: "3"
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
# How to use the Azure Maps Spatial IO module
-> [!NOTE]
->
-> **Azure Maps Spatial service retirement**
->
-> The Azure Maps Spatial service is now deprecated and will be retired on 9/30/25. For more information, see [End of Life Announcement of Azure Maps Spatial](https://aka.ms/AzureMapsSpatialDeprecation).
- The Azure Maps Web SDK provides the [Spatial IO module], which integrates spatial data with the Azure Maps web SDK using JavaScript or TypeScript. The robust features in this module allow developers to: - [Read and write spatial data]. Supported file formats include: KML, KMZ, GPX, GeoRSS, GML, GeoJSON and CSV files containing columns with spatial information. Also supports Well-Known Text (WKT).
azure-netapp-files Configure Customer Managed Keys Hardware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys-hardware.md
na Previously updated : 09/25/2024 Last updated : 09/27/2024 # Configure customer-managed keys with managed Hardware Security Module for Azure NetApp Files volume encryption
Azure NetApp Files volume encryption with customer-managed keys with the managed
## Supported regions
+* Australia Central
+* Australia Central 2
* Australia East
+* Australia Southeast
* Brazil South
+* Brazil Southeast
* Canada Central
+* Canada East
* Central India * Central US * East Asia * East US * East US 2 * France Central
+* Germany North
+* Germany West Central
+* Israel Central
+* Italy North
* Japan East
+* Japan West
* Korea Central * North Central US * North Europe
Azure NetApp Files volume encryption with customer-managed keys with the managed
* Spain Central * Sweden Central * Switzerland North
+* Switzerland West
* UAE Central * UAE North * UK South
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
Title: Dynamically change the service level of a volume for Azure NetApp Files | Microsoft Docs
-description: Describes how to dynamically change the service level of a volume.
+ Title: Dynamically change the service level of an Azure NetApp Files volume
+description: Learn about the benefits of changing the service level of an Azure NetApp Files volume within your NetApp account.
Previously updated : 08/20/2024 Last updated : 09/27/2024
-# Dynamically change the service level of a volume
+# Dynamically change the service level of an Azure NetApp Files volume
-You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume doesn't require that you migrate data nor does it affect access to the volume.
+You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume doesn't require that you migrate data. It also doesn't affect access to the volume.
-This functionality enables you to meet your workload needs on demand. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. For example, if the volume is currently in a capacity pool that uses the *Standard* service level and you want the volume to use the *Premium* service level, you can move the volume dynamically to a capacity pool that uses the *Premium* service level.
+This functionality enables you to meet your workload needs on demand. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. For example, if the volume is in a capacity pool that uses the *Standard* service level and you want the volume to use the *Premium* service level, you can move the volume dynamically to a capacity pool that uses the *Premium* service level.
-The capacity pool that you want to move the volume to must already exist. The capacity pool can contain other volumes. If you want to move the volume to a brand-new capacity pool, you need to [create the capacity pool](azure-netapp-files-set-up-capacity-pool.md) before you move the volume.
+The capacity pool that you want to move the volume to must already exist. The capacity pool can contain other volumes. If you want to move the volume to a brand-new capacity pool, you need to [create the capacity pool](azure-netapp-files-set-up-capacity-pool.md) before you move the volume.
## Considerations
-* This functionality is supported within the same NetApp account. You can't move the volume to a capacity pool in a different NetApp Account.
+* Dynamically changing the service level of a volume is supported within the same NetApp account. You can't move the volume to a capacity pool in a different NetApp Account.
-* After the volume is moved to another capacity pool, you no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
+* After the volume is moved to another capacity pool, you no longer have access to the previous volume activity logs and volume metrics. The volume starts with new activity logs and metrics under the new capacity pool.
-* If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to higher service level without wait time.
+* If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least 24 hours before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to a higher service level without wait time.
-* If the target capacity pool is of the *manual* QoS type, the volume's throughput isn't changed with the volume move. You can [modify the allotted throughput](manage-manual-qos-capacity-pool.md#modify-the-allotted-throughput-of-a-manual-qos-volume) subsequently in the target manual capacity pool.
+* If the target capacity pool is of the *manual* QoS type, the volume's throughput isn't changed with the volume move. You can [modify the allotted throughput](manage-manual-qos-capacity-pool.md#modify-the-allotted-throughput-of-a-manual-qos-volume) in the target manual capacity pool.
* Regardless of the source poolΓÇÖs QoS type, when the target pool is of the *auto* QoS type, the volume's throughput is changed with the move to match the service level of the target capacity pool.
The capacity pool that you want to move the volume to must already exist. The ca
3. Select **OK**. - ## Next steps * [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated : 09/17/2024 Last updated : 09/27/2024 # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary of the latest new features and enhancements.-
+
## September 2024
+* [Dynamic service level change enhancement:](dynamic-change-volume-service-level.md) shortened wait time for changing to lower service levels
+
+   To address rapidly changing performance requirements, Azure NetApp Files allows [dynamic service level changes of volumes](dynamic-change-volume-service-level.md). The wait time for moving Azure NetApp Files volumes to a lower service level (after first moving service levels upwards) is now 24 hours (a change from the original seven days), enabling you to more actively benefit from this cost optimization capability.
+ * [Reserved capacity](reservations.md) is now generally available (GA)
+
+   Pay-as-you-go pricing is the most convenient way to purchase cloud storage when your workloads are dynamic or changing over time. However, some workloads are more predictable with stable capacity usage over an extended period. These workloads can benefit from savings in exchange for a longer-term commitment. With a one-year or three-year commitment of an Azure NetApp Files reservation, you can save up to 34% on sustained usage of Azure NetApp Files. Reservations are available in stackable increments of 100 TiB and 1 PiB on Standard, Premium, and Ultra service levels in a given region. Azure NetApp Files reservation benefits are automatically applied to existing Azure NetApp Files capacity pools in the matching region and service level. Azure NetApp Files reservations provide cost savings and financial predictability and stability, allowing for more effective budgeting. Additional usage is conveniently billed at the regular pay-as-you-go rate.
azure-resource-manager Bicep Core Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP004' />BCP004 | Error | The string at this location isn't terminated due to an unexpected new line character. | | <a id='BCP005' />BCP005 | Error | The string at this location isn't terminated. Complete the escape sequence and terminate the string with a single unescaped quote character. | | <a id='BCP006' />BCP006 | Error | The specified escape sequence isn't recognized. Only the following escape sequences are allowed: {ToQuotedString(escapeSequences)}. |
-| <a id='BCP007' />BCP007 | Error | This declaration type isn't recognized. Specify a metadata, parameter, variable, resource, or output declaration. |
+| <a id='BCP007' />[BCP007](./diagnostics/bcp007.md) | Error | This declaration type isn't recognized. Specify a metadata, parameter, variable, resource, or output declaration. |
| <a id='BCP008' />BCP008 | Error | Expected the "=" token, or a newline at this location. |
-| <a id='BCP009' />BCP009 | Error | Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location. |
+| <a id='BCP009' />[BCP009](./diagnostics/bcp009.md) | Error | Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location. |
| <a id='BCP010' />BCP010 | Error | Expected a valid 64-bit signed integer. | | <a id='BCP011' />BCP011 | Error | The type of the specified value is incorrect. Specify a string, boolean, or integer literal. | | <a id='BCP012' />BCP012 | Error | Expected the "{keyword}" keyword at this location. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP060' />BCP060 | Error | The "variables" function isn't supported. Directly reference variables by their symbolic names. | | <a id='BCP061' />BCP061 | Error | The "parameters" function isn't supported. Directly reference parameters by their symbolic names. | | <a id='BCP062' />[BCP062](./diagnostics/bcp062.md) | Error | The referenced declaration with name \<type-name> isn't valid. |
-| <a id='BCP063' />BCP063 | Error | The name "{name}" isn't a parameter, variable, resource, or module. |
+| <a id='BCP063' />[BCP063](./diagnostics/bcp063.md) | Error | The name \<name> isn't a parameter, variable, resource, or module. |
| <a id='BCP064' />BCP064 | Error | Found unexpected tokens in interpolated expression. |
-| <a id='BCP065' />BCP065 | Error | Function "{functionName}" isn't valid at this location. It can only be used as a parameter default value. |
-| <a id='BCP066' />BCP066 | Error | Function "{functionName}" isn't valid at this location. It can only be used in resource declarations. |
+| <a id='BCP065' />BCP065 | Error | Function \<function-name> isn't valid at this location. It can only be used as a parameter default value. |
+| <a id='BCP066' />BCP066 | Error | Function \<function-name> isn't valid at this location. It can only be used in resource declarations. |
| <a id='BCP067' />BCP067 | Error | Can't call functions on type "{wrongType}". An "{LanguageConstants.Object}" type is required. | | <a id='BCP068' />BCP068 | Error | Expected a resource type string. Specify a valid resource type of format "\<types>@\<apiVersion>". | | <a id='BCP069' />BCP069 | Error | The function "{function}" isn't supported. Use the "{@operator}" operator instead. | | <a id='BCP070' />BCP070 | Error | Argument of type "{argumentType}" isn't assignable to parameter of type "{parameterType}". |
-| <a id='BCP071' />BCP071 | Error | Expected {expected}, but got {argumentCount}. |
+| <a id='BCP071' />[BCP071](./diagnostics/bcp071.md) | Error | Expected \<argument-count>, but got \<argument-count>. |
| <a id='BCP072' />[BCP072](./diagnostics/bcp072.md) | Error | This symbol can't be referenced here. Only other parameters can be referenced in parameter default values. | | <a id='BCP073' />[BCP073](./diagnostics/bcp073.md) | Error/Warning | The property \<property-name> is read-only. Expressions can't be assigned to read-only properties. | | <a id='BCP074' />BCP074 | Error | Indexing over arrays requires an index of type "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP079' />BCP079 | Error | This expression is referencing its own declaration, which isn't allowed. | | <a id='BCP080' />BCP080 | Error | The expression is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). | | <a id='BCP081' />BCP081 | Warning | Resource type "{resourceTypeReference.FormatName()}" doesn't have types available. Bicep is unable to validate resource properties prior to deployment, but this won't block the resource from being deployed. |
-| <a id='BCP082' />BCP082 | Error | The name "{name}" doesn't exist in the current context. Did you mean "{suggestedName}"? |
+| <a id='BCP082' />[BCP082](./diagnostics/bcp082.md) | Error | The name \<name> doesn't exist in the current context. Did you mean \<name>? |
| <a id='BCP083' />[BCP083](./diagnostics/bcp083.md) | Error/Warning | The type \<type-definition> doesn't contain property \<property-name>. Did you mean \<property-name>? | | <a id='BCP084' />BCP084 | Error | The symbolic name "{name}" is reserved. Use a different symbolic name. Reserved namespaces are {ToQuotedString(namespaces.OrderBy(ns => ns))}. | | <a id='BCP085' />BCP085 | Error | The specified file path contains one or more invalid path characters. The following aren't permitted: {ToQuotedString(forbiddenChars.OrderBy(x => x).Select(x => x.ToString()))}. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP121' />BCP121 | Error | Resources: {ToQuotedString(resourceNames)} are defined with this same name in a file. Rename them or split into different modules. | | <a id='BCP122' />BCP122 | Error | Modules: {ToQuotedString(moduleNames)} are defined with this same name and this same scope in a file. Rename them or split into different modules. | | <a id='BCP123' />BCP123 | Error | Expected a namespace or decorator name at this location. |
-| <a id='BCP124' />BCP124 | Error | The decorator "{decoratorName}" can only be attached to targets of type "{attachableType}", but the target has type "{targetType}". |
-| <a id='BCP125' />BCP125 | Error | Function "{functionName}" can't be used as a parameter decorator. |
-| <a id='BCP126' />BCP126 | Error | Function "{functionName}" can't be used as a variable decorator. |
-| <a id='BCP127' />BCP127 | Error | Function "{functionName}" can't be used as a resource decorator. |
-| <a id='BCP128' />BCP128 | Error | Function "{functionName}" can't be used as a module decorator. |
-| <a id='BCP129' />BCP129 | Error | Function "{functionName}" can't be used as an output decorator. |
+| <a id='BCP124' />[BCP124](./diagnostics/bcp124.md) | Error | The decorator \<decorator-name> can only be attached to targets of type \<data-type>, but the target has type \<data-type>. |
+| <a id='BCP125' />[BCP125](./diagnostics/bcp125.md) | Error | Function \<function-name> can't be used as a parameter decorator. |
+| <a id='BCP126' />[BCP126](./diagnostics/bcp126.md) | Error | Function \<function-name> can't be used as a variable decorator. |
+| <a id='BCP127' />[BCP127](./diagnostics/bcp127.md) | Error | Function \<function-name> can't be used as a resource decorator. |
+| <a id='BCP128' />[BCP128](./diagnostics/bcp128.md) | Error | Function \<function-name> can't be used as a module decorator. |
+| <a id='BCP129' />[BCP129](./diagnostics/bcp129.md) | Error | Function \<function-name> can't be used as an output decorator. |
| <a id='BCP130' />BCP130 | Error | Decorators aren't allowed here. |
-| <a id='BCP132' />BCP132 | Error | Expected a declaration after the decorator. |
+| <a id='BCP132' />[BCP132](./diagnostics/bcp132.md) | Error | Expected a declaration after the decorator. |
| <a id='BCP133' />BCP133 | Error | The unicode escape sequence isn't valid. Valid unicode escape sequences range from \\u{0} to \\u{10FFFF}. | | <a id='BCP134' />BCP134 | Warning | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} isn't valid for this module. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. | | <a id='BCP135' />BCP135 | Warning | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} isn't valid for this resource type. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP143' />BCP143 | Error | For-expressions can't be used with properties whose names are also expressions. | | <a id='BCP144' />BCP144 | Error | Directly referencing a resource or module collection isn't currently supported here. Apply an array indexer to the expression. | | <a id='BCP145' />BCP145 | Error | Output "{identifier}" is declared multiple times. Remove or rename the duplicates. |
-| <a id='BCP147' />BCP147 | Error | Expected a parameter declaration after the decorator. |
+| <a id='BCP147' />[BCP147](./diagnostics/bcp147.md) | Error | Expected a parameter declaration after the decorator. |
| <a id='BCP148' />BCP148 | Error | Expected a variable declaration after the decorator. | | <a id='BCP149' />BCP149 | Error | Expected a resource declaration after the decorator. | | <a id='BCP150' />BCP150 | Error | Expected a module declaration after the decorator. | | <a id='BCP151' />BCP151 | Error | Expected an output declaration after the decorator. |
-| <a id='BCP152' />BCP152 | Error | Function "{functionName}" can't be used as a decorator. |
-| <a id='BCP153' />BCP153 | Error | Expected a resource or module declaration after the decorator. |
+| <a id='BCP152' />[BCP152](./diagnostics/bcp152.md) | Error | Function \<function-name> can't be used as a decorator. |
+| <a id='BCP153' />[BCP153](./diagnostics/bcp153.md) | Error | Expected a resource or module declaration after the decorator. |
| <a id='BCP154' />BCP154 | Error | Expected a batch size of at least {limit} but the specified value was "{value}". |
-| <a id='BCP155' />BCP155 | Error | The decorator "{decoratorName}" can only be attached to resource or module collections. |
+| <a id='BCP155' />BCP155 | Error | The decorator \<decorator-name> can only be attached to resource or module collections. |
| <a id='BCP156' />BCP156 | Error | The resource type segment "{typeSegment}" is invalid. Nested resources must specify a single type segment, and optionally can specify an API version using the format "\<type>@\<apiVersion>". | | <a id='BCP157' />BCP157 | Error | The resource type can't be determined due to an error in the containing resource. | | <a id='BCP158' />BCP158 | Error | Can't access nested resources of type "{wrongType}". A resource type is required. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP162' />BCP162 | Error | Expected a loop item variable identifier or "(" at this location. | | <a id='BCP164' />BCP164 | Error | A child resource's scope is computed based on the scope of its ancestor resource. This means that using the "scope" property on a child resource is unsupported. | | <a id='BCP165' />BCP165 | Error | A resource's computed scope must match that of the Bicep file for it to be deployable. This resource's scope is computed from the "scope" property value assigned to ancestor resource "{ancestorIdentifier}". You must use modules to deploy resources to a different scope. |
-| <a id='BCP166' />BCP166 | Error | Duplicate "{decoratorName}" decorator. |
+| <a id='BCP166' />[BCP166](./diagnostics/bcp166.md) | Error | Duplicate \<decorator-name> decorator. |
| <a id='BCP167' />BCP167 | Error | Expected the "{" character or the "if" keyword at this location. | | <a id='BCP168' />BCP168 | Error | Length must not be a negative value. | | <a id='BCP169' />BCP169 | Error | Expected resource name to contain {expectedSlashCount} "/" character(s). The number of name segments must match the number of segments in the resource type. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP177' />BCP177 | Error | This expression is being used in the if-condition expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} | | <a id='BCP178' />BCP178 | Error | This expression is being used in the for-expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} | | <a id='BCP179' />BCP179 | Warning | Unique resource or deployment name is required when looping. The loop item variable "{itemVariableName}" or the index variable "{indexVariableName}" must be referenced in at least one of the value expressions of the following properties in the loop body: {ToQuotedString(expectedVariantProperties)} |
-| <a id='BCP180' />BCP180 | Error | Function "{functionName}" isn't valid at this location. It can only be used when directly assigning to a module parameter with a secure decorator. |
-| <a id='BCP181' />BCP181 | Error | This expression is being used in an argument of the function "{functionName}", which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
+| <a id='BCP180' />BCP180 | Error | Function \<function-name> isn't valid at this location. It can only be used when directly assigning to a module parameter with a secure decorator. |
+| <a id='BCP181' />BCP181 | Error | This expression is being used in an argument of the function \<function-name>, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
| <a id='BCP182' />BCP182 | Error | This expression is being used in the for-body of the variable "{variableName}", which requires values that can be calculated at the start of the deployment.{variableDependencyChainClause}{violatingPropertyNameClause}{accessiblePropertiesClause} | | <a id='BCP183' />BCP183 | Error | The value of the module "params" property must be an object literal. | | <a id='BCP184' />BCP184 | Error | File '{filePath}' exceeded maximum size of {maxSize} {unit}. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP238' />BCP238 | Error | Unexpected new line character after a comma. | | <a id='BCP239' />BCP239 | Error | Identifier "{name}" is a reserved Bicep symbol name and can't be used in this context. | | <a id='BCP240' />BCP240 | Error | The "parent" property only permits direct references to resources. Expressions aren't supported. |
-| <a id='BCP241' />BCP241 | Warning | The "{functionName}" function is deprecated and will be removed in a future release of Bicep. Add a comment to https://github.com/Azure/bicep/issues/2017 if you believe this will impact your workflow. |
+| <a id='BCP241' />BCP241 | Warning | The \<function-name> function is deprecated and will be removed in a future release of Bicep. Add a comment to https://github.com/Azure/bicep/issues/2017 if you believe this will impact your workflow. |
| <a id='BCP242' />BCP242 | Error | Lambda functions may only be specified directly as function arguments. | | <a id='BCP243' />BCP243 | Error | Parentheses must contain exactly one expression. | | <a id='BCP244' />BCP244 | Error | {minArgCount == maxArgCount ? $"Expected lambda expression of type "{lambdaType}" with {minArgCount} arguments but received {actualArgCount} arguments." : $"Expected lambda expression of type "{lambdaType}" with between {minArgCount} and {maxArgCount} arguments but received {actualArgCount} arguments."} | | <a id='BCP245' />BCP245 | Warning | Resource type "{resourceTypeReference.FormatName()}" can only be used with the 'existing' keyword. | | <a id='BCP246' />BCP246 | Warning | Resource type "{resourceTypeReference.FormatName()}" can only be used with the 'existing' keyword at the requested scope. Permitted scopes for deployment: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(writableScopes))}. | | <a id='BCP247' />BCP247 | Error | Using lambda variables inside resource or module array access isn't currently supported. Found the following lambda variable(s) being accessed: {ToQuotedString(variableNames)}. |
-| <a id='BCP248' />BCP248 | Error | Using lambda variables inside the "{functionName}" function isn't currently supported. Found the following lambda variable(s) being accessed: {ToQuotedString(variableNames)}. |
+| <a id='BCP248' />BCP248 | Error | Using lambda variables inside the \<function-name> function isn't currently supported. Found the following lambda variable(s) being accessed: {ToQuotedString(variableNames)}. |
| <a id='BCP249' />BCP249 | Error | Expected loop variable block to consist of exactly 2 elements (item variable and index variable), but found {actualCount}. | | <a id='BCP250' />BCP250 | Error | Parameter "{identifier}" is assigned multiple times. Remove or rename the duplicates. | | <a id='BCP256' />BCP256 | Error | The using declaration is missing a Bicep template file path reference. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP263' />BCP263 | Error | The file specified in the using declaration path doesn't exist. | | <a id='BCP264' />BCP264 | Error | Resource type "{resourceTypeName}" is declared in multiple imported namespaces ({ToQuotedStringWithCaseInsensitiveOrdering(namespaces)}), and must be fully qualified. | | <a id='BCP265' />BCP265 | Error | The name "{name}" isn't a function. Did you mean "{knownFunctionNamespace}.{knownFunctionName}"? |
-| <a id='BCP266' />BCP266 | Error | Expected a metadata identifier at this location. |
+| <a id='BCP266' />[BCP266](./diagnostics/bcp266.md) | Error | Expected a metadata identifier at this location. |
| <a id='BCP267' />BCP267 | Error | Expected a metadata declaration after the decorator. | | <a id='BCP268' />BCP268 | Error | Invalid identifier: "{name}". Metadata identifiers starting with '_' are reserved. Use a different identifier. |
-| <a id='BCP269' />BCP269 | Error | Function "{functionName}" can't be used as a metadata decorator. |
+| <a id='BCP269' />BCP269 | Error | Function \<function-name> can't be used as a metadata decorator. |
| <a id='BCP271' />BCP271 | Error | Failed to parse the contents of the Bicep configuration file "{configurationPath}" as valid JSON: {parsingErrorMessage.TrimEnd('.')}. | | <a id='BCP272' />BCP272 | Error | Couldn't load the Bicep configuration file "{configurationPath}": {loadErrorMessage.TrimEnd('.')}. | | <a id='BCP273' />BCP273 | Error | Failed to parse the contents of the Bicep configuration file "{configurationPath}" as valid JSON: {parsingErrorMessage.TrimEnd('.')}. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP287' />BCP287 | Error | '{symbolName}' refers to a value but is being used as a type here. | | <a id='BCP288' />[BCP288](./diagnostics/bcp288.md) | Error | \<name> refers to a type but is being used as a value here. | | <a id='BCP289' />BCP289 | Error | The type definition isn't valid. |
-| <a id='BCP290' />BCP290 | Error | Expected a parameter or type declaration after the decorator. |
+| <a id='BCP290' />[BCP290](./diagnostics/bcp290.md) | Error | Expected a parameter or type declaration after the decorator. |
| <a id='BCP291' />BCP291 | Error | Expected a parameter or output declaration after the decorator. |
-| <a id='BCP292' />BCP292 | Error | Expected a parameter, output, or type declaration after the decorator. |
+| <a id='BCP292' />[BCP292](./diagnostics/bcp292.md) | Error | Expected a parameter, output, or type declaration after the decorator. |
| <a id='BCP293' />BCP293 | Error | All members of a union type declaration must be literal values. | | <a id='BCP294' />[BCP294](./diagnostics/bcp294.md) | Error | Type unions must be reducible to a single ARM type (such as 'string', 'int', or 'bool'). | | <a id='BCP295' />BCP295 | Error | The '{decoratorName}' decorator may not be used on targets of a union or literal type. The allowed values for this parameter or type definition will be derived from the union or literal type automatically. | | <a id='BCP296' />BCP296 | Error | Property names on types must be compile-time constant values. |
-| <a id='BCP297' />BCP297 | Error | Function "{functionName}" can't be used as a type decorator. |
+| <a id='BCP297' />BCP297 | Error | Function \<function-name> can't be used as a type decorator. |
| <a id='BCP298' />BCP298 | Error | This type definition includes itself as a required component, which creates a constraint that can't be fulfilled. | | <a id='BCP299' />BCP299 | Error | This type definition includes itself as a required component via a cycle ("{string.Join("\" -> \"", cycle)}"). | | <a id='BCP300' />BCP300 | Error | Expected a type literal at this location. Specify a concrete value or a reference to a literal type. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP305' />BCP305 | Error | Expected the "with" keyword, "as" keyword, or a new line character at this location. | | <a id='BCP306' />BCP306 | Error | The name "{name}" refers to a namespace, not to a type. | | <a id='BCP307' />BCP307 | Error | The expression can't be evaluated, because the identifier properties of the referenced existing resource including {ToQuotedString(runtimePropertyNames.OrderBy(x => x))} can't be calculated at the start of the deployment. In this situation, {accessiblePropertyNamesClause}{accessibleFunctionNamesClause}. |
-| <a id='BCP308' />BCP308 | Error | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a user-defined type. |
+| <a id='BCP308' />BCP308 | Error | The decorator \<decorator-name> may not be used on statements whose declared type is a reference to a user-defined type. |
| <a id='BCP309' />BCP309 | Error | Values of type "{flattenInputType.Name}" can't be flattened because "{incompatibleType.Name}" isn't an array type. |
-| <a id='BCP311' />BCP311 | Error | The provided index value of "{indexSought}" isn't valid for type "{typeName}". Indexes for this type must be between 0 and {tupleLength - 1}. |
+| <a id='BCP311' />[BCP311](./diagnostics/bcp311.md) | Error | The provided index value of \<index-value> isn't valid for type \<type-name>. Indexes for this type must be between 0 and \<zero-based-tuple-index>. |
| <a id='BCP315' />BCP315 | Error | An object type may have at most one additional properties declaration. | | <a id='BCP316' />BCP316 | Error | The "{LanguageConstants.ParameterSealedPropertyName}" decorator may not be used on object types with an explicit additional properties type declaration. | | <a id='BCP317' />BCP317 | Error | Expected an identifier, a string, or an asterisk at this location. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP348' />BCP348 | Error | Using a test declaration statement requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.TestFramework)}". | | <a id='BCP349' />BCP349 | Error | Using an assert declaration requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.Assertions)}". | | <a id='BCP350' />BCP350 | Error | Value of type "{valueType}" can't be assigned to an assert. Asserts can take values of type 'bool' only. |
-| <a id='BCP351' />BCP351 | Error | Function "{functionName}" isn't valid at this location. It can only be used when directly assigning to a parameter. |
+| <a id='BCP351' />BCP351 | Error | Function \<function-name> isn't valid at this location. It can only be used when directly assigning to a parameter. |
| <a id='BCP352' />BCP352 | Error | Failed to evaluate variable "{name}": {message} | | <a id='BCP353' />BCP353 | Error | The {itemTypePluralName} {ToQuotedString(itemNames)} differ only in casing. The ARM deployments engine isn't case sensitive and won't be able to distinguish between them. | | <a id='BCP354' />BCP354 | Error | Expected left brace ('{') or asterisk ('*') character at this location. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP383' />BCP383 | Error | The "{typeName}" type isn't parameterizable. | | <a id='BCP384' />BCP384 | Error | The "{typeName}" type requires {requiredArgumentCount} argument(s). | | <a id='BCP385' />BCP385 | Error | Using resource-derived types requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ResourceDerivedTypes)}". |
-| <a id='BCP386' />BCP386 | Error | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a resource-derived type. |
+| <a id='BCP386' />BCP386 | Error | The decorator \<decorator-name> may not be used on statements whose declared type is a reference to a resource-derived type. |
| <a id='BCP387' />BCP387 | Error | Indexing into a type requires an integer greater than or equal to 0. | | <a id='BCP388' />BCP388 | Error | Can't access elements of type "{wrongType}" by index. A tuple type is required. | | <a id='BCP389' />BCP389 | Error | The type "{wrongType}" doesn't declare an additional properties type. |
azure-resource-manager Bcp007 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp007.md
+
+ Title: BCP007
+description: Error - This declaration type isn't recognized. Specify a metadata, parameter, variable, resource, or output declaration.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP007
+
+This error occurs when the declaration type isn't recognized. For a list of declaration types, see [Understand the structure and syntax of Bicep files](../file.md).
+
+## Error description
+
+`This declaration type isn't recognized. Specify a metadata, parameter, variable, resource, or output declaration.`
+
+## Solution
+
+Use the correct declaration type. For more information, see [Bicep file](../file.md).
+
+## Examples
+
+The following example raises the error because `parameter` isn't a correct declaration type:
+
+```bicep
+parameter name string
+```
+
+You can fix the error by using the correct declaration type, `param`.
+
+```bicep
+param name string
+```
+
+For more information, see [Parameters](../parameters.md).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp009 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp009.md
+
+ Title: BCP009
+description: Error - Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP009
+
+This error occurs when a declaration is not completed.
+
+## Error description
+
+`Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location.`
+
+## Solution
+
+Include the missing part. For more information, see [Bicep file](../file.md).
+
+## Examples
+
+The following example raises the error because the `metadata` declaration isn't completed:
+
+```bicep
+metadata description =
+```
+
+You can fix the error by completing the declaration.
+
+```bicep
+metadata description = 'Creates a storage account and a web app'
+```
+
+For more information, see [Metadata](../file.md#metadata).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp018 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp018.md
output tennisBall object = {
} ```
+For more information, see [Objects](../data-types.md#objects)
+ The following example raises the error because the code is missing a _]_. ```bicep
output colors array = [
] ```
+For more information, see [Arrays](../data-types.md#arrays).
+
+The following example raises the error because the code is missing `=` and the assigned value.
+
+```bicep
+output month int
+```
+
+You can fix the error by completing the output declaration.
+
+```bicep
+output month int = 3
+```
+
+For more information, see [Outputs](../file.md#outputs).
+ ## Next steps
-For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp057 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp057.md
Last updated 08/08/2024
# Bicep error code - BCP057
-This error occurs when the referenced name doesn't exist, either because of a typo or because it hasn't been declared.
+This error occurs when the referenced name doesn't exist, either due to a typo or because it hasn't been declared. If it's a typo, you'll encounter [BCP082](./bcp082.md) when the compiler identifies and suggests a similarly named symbol.
## Error description
This error occurs when the referenced name doesn't exist, either because of a ty
## Solution
-Fix the typo or declare the name.
+Fix the typo or declare the symbol.
## Examples
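A minimal sketch of the error and one way to fix it; the `storageName` symbol is hypothetical and used only for illustration. The first snippet references a name that was never declared:

```bicep
// 'storageName' is referenced but never declared anywhere in the file, so this raises BCP057.
output name string = storageName
```

Declaring the symbol before referencing it resolves the error:

```bicep
// Declaring the symbol (here as a parameter) makes the reference valid.
param storageName string = 'mystore'

output name string = storageName
```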
azure-resource-manager Bcp063 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp063.md
+
+ Title: BCP063
+description: Error - The name <name> isn't a parameter, variable, resource, or module.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP063
+
+This error occurs when the system tries to locate a name within the context, but no matching name is found.
+
+## Error description
+
+`The name <name> isn't a parameter, variable, resource, or module.`
+
+## Solutions
+
+Use the proper declaration type. For more information, see [Bicep file](../file.md).
+
+## Examples
+
+The following example raises the error because `@metadata` isn't a correct declaration type:
+
+```bicep
+@metadata
+resource store 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
+ name: 'mystore'
+}
+```
+
+You can fix the error by properly declaring `metadata`.
+
+```bicep
+metadata description = 'create a storage account'
+
+resource store 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
+ name: 'mystore'
+}
+```
+
+For more information, see [Metadata](../file.md#metadata).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp071 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp071.md
+
+ Title: BCP071
+description: Error - Expected <argument-count>, but got <argument-count>.
++ Last updated : 07/15/2024++
+# Bicep error code - BCP071
+
+This error occurs when a function is given an incorrect number of arguments. For a list of system-defined functions, see [Bicep functions](../bicep-functions-any.md). To define your own functions, see [User-defined functions](../user-defined-functions.md).
+
+## Error description
+
+`Expected <argument-count>, but got <argument-count>.`
+
+## Solution
+
+Provide the correct number of arguments.
+
+## Examples
+
+The following example raises the error because [`split()`](../bicep-functions-string.md#split) expects two arguments, but three arguments were provided:
+
+```bicep
+var tooManyArgs = split('a,b', ',', '?')
+```
+
+You can fix the error by removing the extra argument:
+
+```bicep
+var tooManyArgs = split('a,b', ',')
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp082 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp082.md
+
+ Title: BCP082
+description: Error/warning - The name <name> doesn't exist in the current context. Did you mean <name>?
++ Last updated : 08/06/2024++
+# Bicep error/warning code - BCP082
+
+This error or warning is similar to [BCP057](./bcp057.md). It occurs when the referenced name doesn't exist, likely due to a typo, and the compiler identifies and suggests a similarly named symbol.
+
+## Error/warning description
+
+`The name <name> doesn't exist in the current context. Did you mean <name>?`
+
+## Solutions
+
+Fix the typo.
+
+## Examples
+
+The following example raises the error because `substirng` looks like a typo.
+
+```bicep
+var prefix = substirng('1234567890', 0, 11)
+```
+
+You can fix the error by using `substring` instead:
+
+```bicep
+var prefix = substring('1234567890', 0, 11)
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](../bicep-core-diagnostics.md).
azure-resource-manager Bcp124 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp124.md
+
+ Title: BCP124
+description: Error - The decorator <decorator-name> can only be attached to targets of type <data-type>, but the target has type <data-type>.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP124
+
+This error occurs when you specify a decorator that isn't supported by the type of the syntax being decorated.
+
+## Error description
+
+`The decorator <decorator-name> can only be attached to targets of type <data-type>, but the target has type <data-type>.`
+
+## Solutions
+
+Use the valid decorators based on the data types.
+
+## Examples
+
+The following example raises the error because `@maxValue()` is for the integer data type, not for the string data type.
+
+```bicep
+@maxValue(3)
+param name string
+```
+
+You can fix the error by providing the correct decorator for the data type:
+
+```bicep
+@maxLength(3)
+param name string
+```
+
+For a list of decorators, see [Decorators](../file.md#decorators).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp125 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp125.md
+
+ Title: BCP125
+description: Error - Function <function-name> can't be used as a parameter decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP125
+
+This error occurs when you specify an invalid parameter decorator.
+
+## Error description
+
+`Function <function-name> can't be used as a parameter decorator.`
+
+## Solutions
+
+Use the valid decorators for parameter declarations. For more information, see [Decorators](../parameters.md#use-decorators).
+
+## Examples
+
+The following example raises the error because `@export()` isn't a valid decorator for parameters.
+
+```bicep
+@export()
+param name string
+```
+
+You can fix the error by providing the correct decorator for parameters:
+
+```bicep
+@description('Specify the resource name.')
+param name string
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp126 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp126.md
+
+ Title: BCP126
+description: Error - Function <function-name> can't be used as a variable decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP126
+
+This error occurs when you specify an invalid variable decorator.
+
+## Error description
+
+`Function <function-name> can't be used as a variable decorator.`
+
+## Solutions
+
+Use the valid decorators for variable declarations. For more information, see [Decorators](../variables.md#use-decorators).
+
+## Examples
+
+The following example raises the error because `@minLength()` is not a valid variable decorator.
+
+```bicep
+@minLength()
+var name = uniqueString(resourceGroup().id)
+```
+
+The valid variable decorators are `@description()` and `@export()`.
+
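+For example, a minimal sketch that applies `@description()` to the same variable (the description text is arbitrary):
+
+```bicep
+@description('A unique name derived from the resource group ID.')
+var name = uniqueString(resourceGroup().id)
+```
+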
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp127 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp127.md
+
+ Title: BCP127
+description: Error - Function <function-name> can't be used as a resource decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP127
+
+This error occurs when you specify an invalid resource decorator.
+
+## Error description
+
+`Function <function-name> can't be used as a resource decorator.`
+
+## Solutions
+
+Use the valid decorators for resource declarations. For more information, see [Decorators](../resource-declaration.md#use-decorators).
+
+## Examples
+
+The following example raises the error because `@export()` is not a valid resource decorator.
+
+```bicep
+@export()
+resource store 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
+ name: uniqueString(resourceGroup().id)
+}
+```
+
+The valid resource decorators are `@description()` and `@batchSize()`.
+
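+For example, a minimal sketch that applies `@description()` to the same resource declaration (the description text is arbitrary):
+
+```bicep
+@description('References an existing storage account.')
+resource store 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
+  name: uniqueString(resourceGroup().id)
+}
+```
+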
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp128 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp128.md
+
+ Title: BCP128
+description: Error - Function <function-name> can't be used as a module decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP128
+
+This error occurs when you specify an invalid module decorator.
+
+## Error description
+
+`Function <function-name> can't be used as a module decorator.`
+
+## Solutions
+
+Use the valid decorators for module declarations. For more information, see [Decorators](../modules.md#use-decorators).
+
+## Examples
+
+The following example raises the error because `@export()` is not a valid module decorator.
+
+```bicep
+@export()
+module storage 'br/public:avm/res/storage/storage-account:0.11.1' = {
+ name: 'myStorage'
+ params: {
+ name: 'store${resourceGroup().name}'
+ }
+}
+```
+
+The valid module decorators are `@description()` and `@batchSize()`.
+
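+For example, a minimal sketch that applies `@description()` to the same module declaration (the description text is arbitrary):
+
+```bicep
+@description('Deploys a storage account through the AVM storage module.')
+module storage 'br/public:avm/res/storage/storage-account:0.11.1' = {
+  name: 'myStorage'
+  params: {
+    name: 'store${resourceGroup().name}'
+  }
+}
+```
+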
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp129 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp129.md
+
+ Title: BCP129
+description: Error - Function <function-name> can't be used as an output decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP129
+
+This error occurs when you specify an invalid output decorator.
+
+## Error description
+
+`Function <function-name> can't be used as an output decorator.`
+
+## Solutions
+
+Use the valid decorators for output declarations. For more information, see [Decorators](../outputs.md#use-decorators).
+
+## Examples
+
+The following example raises the error because `@export()` isn't a valid output decorator.
+
+```bicep
+@export()
+output foo string = 'Hello world'
+```
+
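+You can fix the error by using a decorator that is valid for outputs, such as `@description()`. A minimal sketch (the description text is arbitrary):
+
+```bicep
+@description('A simple string output.')
+output foo string = 'Hello world'
+```
+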
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp132 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp132.md
+
+ Title: BCP132
+description: Error - Expected a declaration after the decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP132
+
+This error occurs when you have a decorator but there is no declaration following it.
+
+## Error description
+
+`Expected a declaration after the decorator.`
+
+## Solutions
+
+Add the proper declaration after the decorator.
+
+## Examples
+
+The following example raises the error because there are no declarations after the decorator.
+
+```bicep
+@description()
+```
+
+You can fix the error by removing the decorator or adding the declaration after the decorator.
+
+```bicep
+@description('Declare an existing storage account.')
+resource store 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
+ name: 'mystore'
+}
+```
+
+For a list of valid decorators, see [Decorators](../file.md#decorators).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp147 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp147.md
+
+ Title: BCP147
+description: Error - Expected a parameter declaration after the decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP147
+
+This error occurs when you have a decorator that expects to be followed by a `param` declaration, but the declaration is missing.
+
+## Error description
+
+`Expected a parameter declaration after the decorator.`
+
+## Solutions
+
+Add the parameter declaration after the decorator. For a list of valid parameter decorators, see [Decorators](../parameters.md#use-decorators).
+
+## Examples
+
+The following example raises the error because the `param` declaration is missing.
+
+```bicep
+@allowed()
+```
+
+You can fix the error by adding the `param` declaration.
+
+```bicep
+@allowed([
+ 'foo'
+ 'bar'
+])
+param stringParam string = 'foo'
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp152 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp152.md
+
+ Title: BCP152
+description: Error - Function <function-name> can't be used as a decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP152
+
+This error occurs when you try to use a Bicep function as a decorator, but the function isn't suitable for that purpose.
+
+## Error description
+
+`Function <function-name> can't be used as a decorator.`
+
+## Solutions
+
+Use valid decorators. For a list of parameter decorators, see [Decorators](../parameters.md#use-decorators).
+
+## Examples
+
+The following example raises the error because `uniqueString()` can't be used as a parameter decorator.
+
+```bicep
+@uniqueString()
+param name string
+```
+
+You can fix the error by using a valid decorator.
+
+```bicep
+@description('Provide resource name.')
+param name string
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp153 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp153.md
+
+ Title: BCP153
+description: Error - Expected a resource or module declaration after the decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP153
+
+This error occurs when you have a decorator that expects to be followed by a `resource` or `module` declaration, but the declaration is missing.
+
+## Error description
+
+`Expected a resource or module declaration after the decorator.`
+
+## Solutions
+
+Add the [module](../modules.md) or [resource](../resource-declaration.md) declaration after the decorator.
+
+## Examples
+
+The following example raises the error because the resource or module declaration is missing.
+
+```bicep
+@batchSize()
+```
+
+You can fix the error by adding the module or resource declaration.
+
+```bicep
+param storageAccounts array = [
+  'store1'
+  'store2'
+]
+
+@batchSize(3)
+module storage 'br/public:avm/res/storage/storage-account:0.11.1' = [for storageName in storageAccounts: {
+  name: 'deploy-${storageName}'
+  params: {
+    name: '${storageName}${uniqueString(resourceGroup().id)}'
+  }
+}]
+```
+
+For a list of valid decorators, see [Module decorators](../modules.md#use-decorators) and [Resource decorators](../resource-declaration.md#use-decorators).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp166 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp166.md
+
+ Title: BCP166
+description: Error - Duplicate <decorator-name> decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP166
+
+This error occurs when you have duplicate decorators.
+
+## Error description
+
+`Duplicate <decorator-name> decorator.`
+
+## Solutions
+
+Remove the duplicate decorator.
+
+## Examples
+
+The following example raises the error because the `@description()` decorator is used twice.
+
+```bicep
+@description('foo')
+@description('bar')
+param name string
+```
+
+You can fix the error by removing one of the duplicate decorators.
+
+```bicep
+@description('bar')
+param name string
+```
+
+For more information, see [Decorators](../file.md#decorators).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp266 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp266.md
+
+ Title: BCP266
+description: Error - Expected a metadata identifier at this location.
++ Last updated : 08/08/2024++
+# Bicep error code - BCP266
+
+This error occurs when a `metadata` declaration is missing its identifier.
+
+## Error description
+
+`Expected a metadata identifier at this location.`
+
+## Solution
+
+Add the metadata identifier, and complete the declaration.
+
+## Examples
+
+The following example raises the error because the `metadata` declaration is missing its identifier.
+
+```bicep
+metadata
+```
+
+You can fix the error by completing the `metadata` declaration.
+
+```bicep
+metadata description = 'Creates a storage account and a web app'
+```
+
+For more information, see [Metadata](../file.md#metadata).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp290 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp290.md
+
+ Title: BCP290
+description: Error - Expected a parameter or type declaration after the decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP290
+
+This error occurs when you have a decorator that expects to be followed by a `param` or `type` declaration, but the declaration is missing.
+
+## Error description
+
+`Expected a parameter or type declaration after the decorator.`
+
+## Examples
+
+The following example raises the error because there is no parameter or type declaration after the `@secure()` decorator.
+
+```bicep
+@secure()
+```
+
+You can fix the error by removing the decorator, or by adding a parameter or type declaration after it.
+
+```bicep
+@secure()
+param password string
+```
+
+For more information, see [Decorators](../file.md#decorators).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp292 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp292.md
+
+ Title: BCP292
+description: Error - Expected a parameter, output, or type declaration after the decorator.
++ Last updated : 08/23/2024++
+# Bicep error code - BCP292
+
+This error occurs when you have a decorator that expects to be followed by a `param`, `output`, or `type` declaration, but the declaration is missing.
+
+## Error description
+
+`Expected a parameter, output, or type declaration after the decorator.`
+
+## Examples
+
+The following example raises the error because there's no parameter, output, or type declaration after the `@metadata()`, `@minValue()`, `@maxValue()`, `@minLength()`, `@maxLength()`, `@discriminator()`, or `@sealed()` decorator.
+
+```bicep
+@minLength()
+```
+
+You can fix the error by removing the decorator, or by adding the appropriate declaration after it.
+
+```bicep
+@minLength(3)
+param name string
+```
+
+For more information, see [Decorators](../file.md#decorators).
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp311 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp311.md
+
+ Title: BCP311
+description: Error - The provided index value of <index-value> isn't valid for type <type-name>. Indexes for this type must be between 0 and <zero-based-tuple-index>.
++ Last updated : 08/08/2024++
+# Bicep error code - BCP311
+
+This error occurs when you provide an invalid index number. Arrays in Bicep are zero-based. For more information, see [Arrays](../data-types.md#arrays).
+
+## Error description
+
+`The provided index value of <index-value> isn't valid for type <type-name>. Indexes for this type must be between 0 and <zero-based-tuple-index>.`
+
+## Solutions
+
+Use the correct index number.
+
+## Examples
+
+The following example raises the error because the index is out of bounds:
+
+```bicep
+var exampleArray = [
+ 1
+ 2
+ 3
+]
+
+output bar int = exampleArray[3]
+```
+
+You can fix the error by using the correct index number:
+
+```bicep
+var exampleArray = [
+ 1
+ 2
+ 3
+]
+
+output bar int = exampleArray[2]
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp332 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp332.md
param storageAccountName string = 'myStorage'
## Next steps
-For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp333 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp333.md
param storageAccountName string = 'myStorage'
## Next steps
-For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Bcp338 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp338.md
You can fix the error by assigning a string whose length is within the allowable
## Next steps
-For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Async Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/async-operations.md
Title: Status of asynchronous operations description: Describes how to track asynchronous operations in Azure. It shows the values you use to get the status of a long-running operation. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Track asynchronous Azure operations
azure-resource-manager Authenticate Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/authenticate-multi-tenant.md
Title: Authenticate across tenants
description: Describes how Azure Resource Manager handles authentication requests across tenants. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Authenticate requests across tenants
The request has the following authentication header values:
| Authorization | Primary token | Bearer &lt;primary-token&gt; |
| x-ms-authorization-auxiliary | Auxiliary tokens | Bearer &lt;auxiliary-token1&gt;, EncryptedBearer &lt;auxiliary-token2&gt;, Bearer &lt;auxiliary-token3&gt; |
-The auxiliary header can hold up to three auxiliary tokens.
+The auxiliary header can hold up to three auxiliary tokens.
In the code of your multi-tenant app, get the authentication token for other tenants and store them in the auxiliary headers. The user or application must have been invited as a guest to the other tenants.
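
For example, a request that carries both the primary and auxiliary tokens might look like the following sketch; the URI, subscription ID, and tokens are placeholders:

```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups?api-version=2021-04-01
Authorization: Bearer <primary-token>
x-ms-authorization-auxiliary: Bearer <auxiliary-token1>, Bearer <auxiliary-token2>
```
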
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Find resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 11/07/2023 Last updated : 09/26/2024 content_well_notification: - AI-contribution
azure-resource-manager Control Plane And Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-and-data-plane.md
Title: Control plane and data plane operations
description: Describes the difference between control plane and data plane operations. Control plane operations are handled by Azure Resource Manager. Data plane operations are handled by a service. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Azure control plane and data plane
Azure Resource Manager handles all control plane requests. It automatically appl
* [Management Locks](lock-resources.md) * [Activity Logs](/azure/azure-monitor/essentials/activity-log)
-After authenticating the request, Azure Resource Manager sends it to the resource provider, which completes the operation. Even during periods of unavailability for the control plane, you can still access the data plane of your Azure resources. For instance, you can continue to access and operate on data in your storage account resource via its separate storage URI `https://myaccount.blob.core.windows.net` even when `https://management.azure.com` is not available.
+After authenticating the request, Azure Resource Manager sends it to the resource provider, which completes the operation. Even during periods of unavailability for the control plane, you can still access the data plane of your Azure resources. For instance, you can continue to access and operate on data in your storage account resource via its separate storage URI `https://myaccount.blob.core.windows.net` even when `https://management.azure.com` isn't available.
The control plane includes two scenarios for handling requests - "green field" and "brown field". Green field refers to new resources. Brown field refers to existing resources. As you deploy resources, Azure Resource Manager understands when to create new resources and when to update existing resources. You don't have to worry that identical resources will be created.
azure-resource-manager Create Private Link Access Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-commands.md
Title: Manage resources through private link description: Restrict management access for resource to private link Previously updated : 03/19/2024 Last updated : 09/26/2024 # Use APIs to create a private link for managing Azure resources
This article explains how you can use [Azure Private Link](../../private-link/in
## Create resource management private link To create resource management private link, send the following request:+ # [Azure CLI](#tab/azure-cli)+ ### Example+ ```azurecli # Login first with az login if not using Cloud Shell az resourcemanagement private-link create --location WestUS --resource-group PrivateLinkTestRG --name NewRMPL ```
-
+ # [PowerShell](#tab/azure-powershell)+ ### Example+ ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell New-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL ```
-
+ # [REST](#tab/REST)+ REST call+ ```http PUT https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
To create resource management private link, send the following request:
Note the ID that is returned for the new resource management private link. You'll use it for creating the private link association. ## Create private link association+ The resource name of a private link association resource must be a GUID, and it isn't yet supported to disable the publicNetworkAccess field. To create the private link association, use:+ # [Azure CLI](#tab/azure-cli)+ ### Example+ ```azurecli # Login first with az login if not using Cloud Shell az private-link association create --management-group-id fc096d27-0434-4460-a3ea-110df0422a2d --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 --privatelink "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/PrivateLinkTestRG/providers/Microsoft.Authorization/resourceManagementPrivateLinks/newRMPL" ```
-
+ # [PowerShell](#tab/azure-powershell)+ ### Example+ ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell New-AzPrivateLinkAssociation -ManagementGroupId fc096d27-0434-4460-a3ea-110df0422a2d -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 -PrivateLink "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/PrivateLinkTestRG/providers/Microsoft.Authorization/resourceManagementPrivateLinks/newRMPL" -PublicNetworkAccess enabled | fl ```
-
+ # [REST](#tab/REST)+ REST call ```http
To create the private link association, use:
"type": "Microsoft.Authorization/privateLinkAssociations" } ```+ ## Add private endpoint
azure-resource-manager Create Private Link Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-portal.md
Title: Create private link for managing resources - Azure portal description: Use Azure portal to create private link for managing resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Use portal to create private link for managing Azure resources
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
Title: Delete resource group and resources description: Describes how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when a deleting a resource group. It describes the response codes and how Resource Manager handles them to determine if the deletion succeeded. Previously updated : 09/27/2023 Last updated : 09/26/2024 content_well_notification: - AI-contribution
azure-resource-manager Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/deployment-models.md
Title: Resource Manager and classic deployment
description: Describes the differences between the Resource Manager deployment model and the classic (or Service Management) deployment model. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Azure Resource Manager vs. classic deployment: Understand deployment models and the state of your resources
azure-resource-manager Manage Private Link Access Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-private-link-access-commands.md
Title: Manage resource management private links
description: Use APIs to manage existing resource management private links Previously updated : 03/19/2024 Last updated : 09/26/2024 # Manage resource management private links
If you need to create a resource management private link, see [Use portal to cre
To **get a specific** resource management private link, send the following request: # [Azure CLI](#tab/azure-cli)+ ### Example+ ```azurecli # Login first with az login if not using Cloud Shell az resourcemanagement private-link show --resource-group PrivateLinkTestRG --name NewRMPL ```
-
+ # [PowerShell](#tab/azure-powershell)+ ### Example+ ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell Get-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL ```
-
+ # [REST](#tab/REST)+ REST call+ ```http GET https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01 ```
The operation returns:
To **get all** resource management private links in a subscription, use:+ # [Azure CLI](#tab/azure-cli)+ ```azurecli # Login first with az login if not using Cloud Shell az resourcemanagement private-link list ```
-
+ # [PowerShell](#tab/azure-powershell)+ ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell Get-AzResourceManagementPrivateLink ```
-
+ # [REST](#tab/REST)+ REST call+ ```http GET https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.Authorization/resourceManagementPrivateLinks?api-version=2020-05-01
To **get all** resource management private links in a subscription, use:
To **delete a specific** resource management private link, use: # [Azure CLI](#tab/azure-cli)+ ### Example+ ```azurecli # Login first with az login if not using Cloud Shell az resourcemanagement private-link delete --resource-group PrivateLinkTestRG --name NewRMPL ```
-
+ # [PowerShell](#tab/azure-powershell)+ ### Example+ ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell Remove-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL ```
-
+ # [REST](#tab/REST)+ REST call+ ```http DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
To **delete a specific** resource management private link, use:
## Private link association To **get a specific** private link association for a management group, use:+ # [Azure CLI](#tab/azure-cli)+ ### Example+ ```azurecli # Login first with az login if not using Cloud Shell az private-link association show --management-group-id fc096d27-0434-4460-a3ea-110df0422a2d --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 ```
-
+ # [PowerShell](#tab/azure-powershell)+ ### Example+ ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell Get-AzPrivateLinkAssociation -ManagementGroupId fc096d27-0434-4460-a3ea-110df0422a2d -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 | fl ```
-
+ # [REST](#tab/REST)+ REST call+ ```http GET https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations?api-version=2020-05-01
To **get a specific** private link association for a management group, use:
To **delete** a private link association, use:+ # [Azure CLI](#tab/azure-cli)+ ### Example+ ```azurecli # Login first with az login if not using Cloud Shell az private-link association delete --management-group-id 24f15700-370c-45bc-86a7-aee1b0c4eb8a --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 ```
-
+ # [PowerShell](#tab/azure-powershell)+ ### Example+ ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell Remove-AzPrivateLinkAssociation -ManagementGroupId 24f15700-370c-45bc-86a7-aee1b0c4eb8a -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 ```
-
+ # [REST](#tab/REST)+ REST call ```http
The operation returns: `Status 200 OK`.
- ## Next steps * To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-resource-manager Manage Resource Groups Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-cli.md
Title: Manage resource groups - Azure CLI
description: Use Azure CLI to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 03/19/2024 Last updated : 09/26/2024
You can apply tags to resource groups and resources to logically organize your a
## Export resource groups to templates
-To assist with creating ARM templates, you can export a template from existing resources. For more information, see [Use Azure CLI to export a template](../templates/export-template-cli.md).
+To assist with creating ARM templates, you can export a template from existing resources. For more information, see [Use Azure CLI to export a template](../templates/export-template-cli.md).
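
For example, a minimal sketch that exports a template from a hypothetical resource group named `exampleGroup`:

```azurecli
# Export an ARM template from the existing resources in the group
az group export --name exampleGroup
```
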
## Manage access to resource groups
To manage access to a resource group, use [Azure role-based access control (Azur
## Next steps -- To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).-- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](../templates/syntax.md).
+* To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).
+* To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](../templates/syntax.md).
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Title: Manage resource groups - Azure portal
description: Use Azure portal to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Manage Azure resource groups by using the Azure portal
Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resour
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group. The resource group scope is also used throughout the Azure portal to create views that span across multiple resources. For example:+ - Metrics blade provides metrics information (CPU, resources) to users. - Deployments blade shows the history of ARM Template or Bicep deployments targeted to that Resource Group (which includes Portal deployments). - Policy blade provides information related to the policies enforced on the resource group. - Diagnostics settings blade provides the ability to diagnose errors or review warnings.
-The resource group stores metadata about the resources. Therefore, when you specify a location for the resource group, you are specifying where that metadata is stored. For compliance reasons, you might need to ensure that your data is stored in a particular region. Note that resources inside a resource group can be of different regions.
-
+The resource group stores metadata about the resources. Therefore, when you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you might need to ensure that your data is stored in a particular region. Note that resources inside a resource group can be located in different regions.
## Create resource groups
azure-resource-manager Manage Resource Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-powershell.md
Title: Manage resource groups - Azure PowerShell description: Use Azure PowerShell to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 03/19/2024 Last updated : 09/26/2024
For more information about deploying a Bicep file, see [Deploy resources with Bi
## Lock resource groups
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources.
To prevent a resource group and its resources from being deleted, use [New-AzResourceLock](/powershell/module/az.resources/new-azresourcelock).
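
For example, a minimal sketch that applies a delete lock to a hypothetical resource group named `exampleGroup`:

```azurepowershell-interactive
# Prevent the resource group and its resources from being deleted
New-AzResourceLock -LockName LockGroup -LockLevel CanNotDelete -ResourceGroupName exampleGroup
```
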
You can apply tags to resource groups and resources to logically organize your a
## Export resource groups to templates
-To assist with creating ARM templates, you can export a template from existing resources. For more information, see [Use Azure PowerShell to export a template](../templates/export-template-powershell.md).
+To assist with creating ARM templates, you can export a template from existing resources. For more information, see [Use Azure PowerShell to export a template](../templates/export-template-powershell.md).
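
For example, a minimal sketch that exports a template from a hypothetical resource group named `exampleGroup`:

```azurepowershell-interactive
# Export an ARM template from the existing resources in the group
Export-AzResourceGroup -ResourceGroupName exampleGroup
```
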
## Manage access to resource groups
To assist with creating ARM templates, you can export a template from existing r
## Next steps -- To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).-- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](../templates/syntax.md).
+* To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).
+* To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](../templates/syntax.md).
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
Title: Manage resource groups - Python
description: Use Python to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 01/27/2024 Last updated : 09/26/2024 content_well_notification: - AI-contribution ai-usage: ai-assisted + # Manage Azure resource groups by using Python Learn how to use Python with [Azure Resource Manager](overview.md) to manage your Azure resource groups.
To assist with creating ARM templates, you can export a template from existing r
## Next steps -- To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).-- For more information about authentication options, see [Authenticate Python apps to Azure services by using the Azure SDK for Python](/azure/developer/python/sdk/authentication-overview).
+* To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).
+* For more information about authentication options, see [Authenticate Python apps to Azure services by using the Azure SDK for Python](/azure/developer/python/sdk/authentication-overview).
azure-resource-manager Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-cli.md
Title: Manage resources - Azure CLI description: Use Azure CLI and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources. Previously updated : 03/19/2024 Last updated : 09/26/2024
For more information, see [Move resources to new resource group or subscription]
## Lock resources
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
The following script locks a storage account so the account can't be deleted.
azure-resource-manager Manage Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md
Title: Manage resources - Azure portal
description: Use the Azure portal and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Manage Azure resources by using the Azure portal
To open a resource by resource group:
1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the left pane, select **Resource groups** to list the resource within the group.
-3. Select the resource you want to open.
+3. Select the resource you want to open.
## Manage resources
For more information, see [Move resources to new resource group or subscription]
## Lock resources
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
1. Open the resource in the portal. For the steps, see [Open resources](#open-resources). 2. Select **Locks**. The following screenshot shows the management options for a storage account.
For more information, see [Lock resources with Azure Resource Manager](lock-reso
## Tag resources
-Tagging helps organizing your resource group and resources logically.
+Tagging helps you organize your resource group and resources logically.
1. Open the resource in the portal. For the steps, see [Open resources](#open-resources). 2. Select **Tags**. The following screenshot shows the management options for a storage account.
azure-resource-manager Manage Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-powershell.md
Title: Manage resources - Azure PowerShell description: Use Azure PowerShell and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources. Previously updated : 03/19/2024 Last updated : 09/26/2024
For more information, see [Move resources to new resource group or subscription]
## Lock resources
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
The following script locks a storage account so the account can't be deleted.
azure-resource-manager Manage Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-python.md
Title: Manage resources - Python description: Use Python and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources. Previously updated : 06/20/2024 Last updated : 09/26/2024 content_well_notification: - AI-contribution
For more information, see [Move resources to new resource group or subscription]
## Lock resources
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
The following example locks a web site so it can't be deleted.
azure-resource-manager Manage Resources Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-rest.md
Title: Manage resources - REST
description: Use REST operations with Azure Resource Manager to manage your resources. Shows how to read, deploy, and delete resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Manage Azure resources by using the REST API
Last updated 03/19/2024
Learn how to use the REST API for [Azure Resource Manager](overview.md) to manage your Azure resources. For a comprehensive reference of how to structure Azure REST calls, see [Getting Started with REST](/rest/api/azure/). View the [Resource Management REST API reference](/rest/api/resources/) for more details on the available operations. ## Obtain an access token+ To make a REST API call to Azure, you first need to obtain an access token. Include this access token in the headers of your Azure REST API calls using the "Authorization" header and setting the value to "Bearer {access-token}". If you need to programmatically retrieve new tokens as part of your application, you can obtain an access token by [Registering your client application with Microsoft Entra ID](/rest/api/azure/#register-your-client-application-with-azure-ad).
-If you are getting started and want to test Azure REST APIs using your individual token, you can retrieve your current access token quickly with either Azure PowerShell or Azure CLI.
+If you're getting started and want to test Azure REST APIs using your individual token, you can retrieve your current access token quickly with either Azure PowerShell or Azure CLI.
### [Azure CLI](#tab/azure-cli)+ ```azurecli-interactive token=$(az account get-access-token --query accessToken --output tsv) ``` ### [Azure PowerShell](#tab/azure-powershell)+ ```azurepowershell-interactive $token = (Get-AzAccessToken).Token ```
$token = (Get-AzAccessToken).Token
## Operation scope+ You can call many Azure Resource Manager operations at different scopes: | Type | Scope |
You can call many Azure Resource Manager operations at different scopes:
| Resource | `subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderName}/{resourceType}/{resourceName}` | ## List resources+ The following REST operation returns the resources within a provided resource group. ```http
Host: management.azure.com
``` Here is an example cURL command that you can use to list all resources in a resource group using the Azure Resource Manager API:+ ```curl curl -H "Authorization: Bearer $token" -H 'Content-Type: application/json' -X GET 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources?api-version=2021-04-01' ``` - With the authentication step, this example looks like:+ ### [Azure CLI](#tab/azure-cli)+ ```azurecli-interactive token=$(az account get-access-token --query accessToken --output tsv) curl -H "Authorization: Bearer $token" -H 'Content-Type: application/json' -X GET 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources?api-version=2021-04-01' ``` ### [Azure PowerShell](#tab/azure-powershell)+ ```azurepowershell-interactive $token = (Get-AzAccessToken).Token $headers = @{Authorization="Bearer $token"}
Host: management.azure.com
The following operations deploy a Quickstart template to create a storage account. For more information, see [Quickstart: Create Azure Resource Manager templates by using Visual Studio Code](../templates/quickstart-create-templates-use-visual-studio-code.md). For the API reference of this call, see [Deployments - Create Or Update](/rest/api/resources/deployments/create-or-update). - ```http PUT /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/my-deployment?api-version=2021-04-01 HTTP/1.1 Authorization: Bearer <bearer-token>
Host: management.azure.com
} } ```+ For the REST APIs, the value of `uri` can't be a local file or a file that is only available on your local network. Azure Resource Manager must be able to access the template. Provide a URI value that's downloadable as HTTP or HTTPS. For more information, see [Deploy resources with Resource Manager templates and Azure PowerShell](../templates/deploy-powershell.md).
azure-resource-manager Microsoft Resources Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/microsoft-resources-move-regions.md
Title: Move regions for resources in Microsoft.Resources description: Show how to move resources that are in the Microsoft.Resources namespace to new regions. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Move Microsoft.Resources resources to new region
azure-resource-manager Move Resources Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resources-overview.md
Title: Move Azure resources across resource groups, subscriptions, or regions. description: Overview of Azure resource types that can be moved across resource groups, subscriptions, or regions. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Move Azure resources across resource groups, subscriptions, or regions
The move operation doesn't support moving resources to new [Microsoft Entra tena
If you actually want to upgrade your Azure subscription (such as switching from free to pay-as-you-go), you need to convert your subscription. -- To upgrade a free trial, see [Upgrade your Free Trial or Microsoft Imagine Azure subscription to pay-as-you-go](../../cost-management-billing/manage/upgrade-azure-subscription.md).-- To change a pay-as-you-go account, see [Change your Azure pay-as-you-go subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md).
+* To upgrade a free trial, see [Upgrade your Free Trial or Microsoft Imagine Azure subscription to pay-as-you-go](../../cost-management-billing/manage/upgrade-azure-subscription.md).
+* To change a pay-as-you-go account, see [Change your Azure pay-as-you-go subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md).
If you can't convert the subscription, [create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Subscription Management** for the issue type.
Azure geographies, regions, and availability zones form the foundation of the Az
After you deploy resources to a specific Azure region, there are many reasons that you might want to move resources to a different region. -- **Align to a region launch**: Move your resources to a newly introduced Azure region that wasn't previously available.-- **Align for services/features**: Move resources to take advantage of services or features that are available in a specific region.-- **Respond to business developments**: Move resources to a region in response to business changes, such as mergers or acquisitions.-- **Align for proximity**: Move resources to a region local to your business.-- **Meet data requirements**: Move resources to align with data residency requirements, or data classification needs. [Learn more](https://azure.microsoft.com/mediahandler/files/resourcefiles/achieving-compliant-data-residency-and-security-with-azure/Achieving_Compliant_Data_Residency_and_Security_with_Azure.pdf).-- **Respond to deployment requirements**: Move resources that were deployed in error, or move in response to capacity needs.-- **Respond to decommissioning**: Move resources because of decommissioned regions.
+* **Align to a region launch**: Move your resources to a newly introduced Azure region that wasn't previously available.
+* **Align for services/features**: Move resources to take advantage of services or features that are available in a specific region.
+* **Respond to business developments**: Move resources to a region in response to business changes, such as mergers or acquisitions.
+* **Align for proximity**: Move resources to a region local to your business.
+* **Meet data requirements**: Move resources to align with data residency requirements, or data classification needs. [Learn more](https://azure.microsoft.com/mediahandler/files/resourcefiles/achieving-compliant-data-residency-and-security-with-azure/Achieving_Compliant_Data_Residency_and_Security_with_Azure.pdf).
+* **Respond to deployment requirements**: Move resources that were deployed in error, or move in response to capacity needs.
+* **Respond to decommissioning**: Move resources because of decommissioned regions.
### Move resources with Resource Mover You can move resources to a different region with [Azure Resource Mover](../../resource-mover/overview.md). Resource Mover provides: -- A single hub for moving resources across regions.-- Reduced move time and complexity. Everything you need is in a single location.-- A simple and consistent experience for moving different types of Azure resources.-- An easy way to identify dependencies across resources you want to move. This identification helps you to move related resources together, so that everything works as expected in the target region, after the move.-- Automatic cleanup of resources in the source region, if you want to delete them after the move.-- Testing. You can try out a move, and then discard it if you don't want to do a full move.
+* A single hub for moving resources across regions.
+* Reduced move time and complexity. Everything you need is in a single location.
+* A simple and consistent experience for moving different types of Azure resources.
+* An easy way to identify dependencies across resources you want to move. This identification helps you to move related resources together, so that everything works as expected in the target region, after the move.
+* Automatic cleanup of resources in the source region, if you want to delete them after the move.
+* Testing. You can try out a move, and then discard it if you don't want to do a full move.
You can move resources to another region using a couple of different methods: -- **Start moving resources from a resource group**: With this method, you kick off the region move from within a resource group. After selecting the resources you want to move, the process continues in the Resource Mover hub, to check resource dependencies, and orchestrate the move process. [Learn more](../../resource-mover/move-region-within-resource-group.md).-- **Start moving resources directly from the Resource Mover hub**: With this method, you kick off the region move process directly in the hub. [Learn more](../../resource-mover/tutorial-move-region-virtual-machines.md).
+* **Start moving resources from a resource group**: With this method, you kick off the region move from within a resource group. After selecting the resources you want to move, the process continues in the Resource Mover hub, to check resource dependencies, and orchestrate the move process. [Learn more](../../resource-mover/move-region-within-resource-group.md).
+* **Start moving resources directly from the Resource Mover hub**: With this method, you kick off the region move process directly in the hub. [Learn more](../../resource-mover/tutorial-move-region-virtual-machines.md).
### Move resources manually through redeployment
To move resources from a region that doesn't support availability zones to one t
## Next steps -- To check if a resource type supports being moved, see [Move operation support for resources](move-support-resources.md).-- To learn more about the region move process, see [About the move process](../../resource-mover/about-move-process.md).-- To learn more deeply about service relocation and planning recommendations, see [Relocated cloud workloads](/azure/cloud-adoption-framework/relocate/).
+* To check if a resource type supports being moved, see [Move operation support for resources](move-support-resources.md).
+* To learn more about the region move process, see [About the move process](../../resource-mover/about-move-process.md).
+* To learn more deeply about service relocation and planning recommendations, see [Relocated cloud workloads](/azure/cloud-adoption-framework/relocate/).
azure-resource-manager Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/preview-features.md
Title: Set up preview features in Azure subscription description: Describes how to list, register, or unregister preview features in your Azure subscription for a resource provider. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Customer intent: As an Azure user, I want to use preview features in my subscription so that I can expose a resource provider's preview functionality.
InGuestPatchVMPreview Microsoft.Compute Unregistered
``` + ## Configuring preview features using Azure Policy Subscriptions can be remediated to register to a preview feature if not already registered using a [built-in](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe624c84f-2923-4437-9fd9-4115c6da3888) policy definition. Note that new subscriptions added to an existing tenant won't be automatically registered.
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 03/19/2024 Last updated : 09/26/2024
# Azure Resource Graph sample queries for Azure Resource Manager This page is a collection of [Azure Resource Graph](../../governance/resource-graph/overview.md) sample queries for Azure Resource Manager.+ ## Sample queries for tags [!INCLUDE [azure-resource-graph-samples-cat-tags](./includes/tags.md)]
azure-resource-manager Resource Group Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-group-insights.md
Title: Azure Monitor Resource Group insights | Microsoft Docs description: Understand the health and performance of your distributed applications and services at the Resource Group level with Resource Group insights feature of Azure Monitor. Previously updated : 03/19/2024 Last updated : 09/26/2024
Modern applications are often complex and highly distributed with many discrete
1. Select **Resource groups** from the left-side navigation bar. 2. Pick one of your resource groups that you want to explore. (If you have a large number of resource groups filtering by subscription can sometimes be helpful.)
-3. To access insights for a resource group, click **Insights** in the left-side menu of any resource group.
+3. To access insights for a resource group, select **Insights** in the left-side menu of any resource group.
<!-- convertborder later --> :::image type="content" source="./media/resource-group-insights/0001-overview.png" lightbox="./media/resource-group-insights/0001-overview.png" alt-text="Screenshot of resource group insights overview page." border="false":::
What if you've noticed your application is running slowly, or users have reporte
The **Performance** and **Failures** tabs simplify this process by bringing together performance and failure diagnostic views for many common resource types.
-Most resource types will open a gallery of Azure Monitor Workbook templates. Each workbook you create can be customized, saved, shared with your team, and reused in the future to diagnose similar issues.
+Most resource types open a gallery of Azure Monitor Workbook templates. Each workbook you create can be customized, saved, shared with your team, and reused in the future to diagnose similar issues.
### Investigate failures
The left-side menu bar changes after your selection is made, offering you new op
<!-- convertborder later --> :::image type="content" source="./media/resource-group-insights/00004-failures.png" lightbox="./media/resource-group-insights/00004-failures.png" alt-text="Screenshot of Failure overview pane." border="false":::
-When App Service is chosen, you are presented with a gallery of Azure Monitor Workbook templates.
+When App Service is chosen, you're presented with a gallery of Azure Monitor Workbook templates.
<!-- convertborder later --> :::image type="content" source="./media/resource-group-insights/0005-failure-insights-workbook.png" lightbox="./media/resource-group-insights/0005-failure-insights-workbook.png" alt-text="Screenshot of application workbook gallery." border="false":::
-Choosing the template for Failure Insights will open the workbook.
+Choosing the template for Failure Insights opens the workbook.
<!-- convertborder later --> :::image type="content" source="./media/resource-group-insights/0006-failure-visual.png" lightbox="./media/resource-group-insights/0006-failure-visual.png" alt-text="Screenshot of failure report." border="false":::
You can select any of the rows. The selection is then displayed in a graphical d
<!-- convertborder later --> :::image type="content" source="./media/resource-group-insights/0007-failure-details.png" lightbox="./media/resource-group-insights/0007-failure-details.png" alt-text="Screenshot of failure details." border="false":::
-Workbooks abstract away the difficult work of creating custom reports and visualizations into an easily consumable format. While some users may only want to adjust the prebuilt parameters, workbooks are completely customizable.
+Workbooks abstract away the difficult work of creating custom reports and visualizations into an easily consumable format. While some users may only want to adjust the prebuilt parameters, workbooks are customizable.
To get a sense of how this workbook functions internally, select **Edit** in the top bar. <!-- convertborder later -->
Performance offers its own gallery of workbooks. For App Service the prebuilt Ap
<!-- convertborder later --> :::image type="content" source="./media/resource-group-insights/0011-performance.png" lightbox="./media/resource-group-insights/0011-performance.png" alt-text="Screenshot of performance view." border="false":::
-In this case, if you select edit you will see that this set of visualizations is powered by Azure Monitor Metrics.
+In this case, if you select **Edit**, you'll see that this set of visualizations is powered by Azure Monitor Metrics.
<!-- convertborder later --> :::image type="content" source="./media/resource-group-insights/0012-performance-metrics.png" lightbox="./media/resource-group-insights/0012-performance-metrics.png" alt-text="Screenshot of performance view with Azure Metrics." border="false":::
In this case, if you select edit you will see that this set of visualizations is
### Enabling access to alerts
-To see alerts in Resource Group insights, someone with an Owner or Contributor role for this subscription needs to open Resource Group insights for any resource group in the subscription. This will enable anyone with read access to see alerts in Resource Group insights for all of the resource groups in the subscription. If you have an Owner or Contributor role, refresh this page in a few minutes.
+To see alerts in Resource Group insights, someone with an Owner or Contributor role for this subscription needs to open Resource Group insights for any resource group in the subscription. This enables anyone with read access to see alerts in Resource Group insights for all of the resource groups in the subscription. If you have an Owner or Contributor role, refresh this page in a few minutes.
Resource Group insights relies on the Azure Monitor Alerts Management system to retrieve alert status. Alerts Management isn't configured for every resource group and subscription by default, and it can only be enabled by someone with an Owner or Contributor role. It can be enabled either by:+ * Opening Resource Group insights for any resource group in the subscription. * Or by going to the subscription, clicking **Resource Providers**, then clicking **Register** for **Microsoft.AlertsManagement**. ## Next steps -- [Azure Monitor Workbooks](/azure/azure-monitor/visualize/workbooks-overview)-- [Azure Resource Health](/azure/service-health/resource-health-overview)-- [Azure Monitor Alerts](/azure/azure-monitor/alerts/alerts-overview)
+* [Azure Monitor Workbooks](/azure/azure-monitor/visualize/workbooks-overview)
+* [Azure Resource Health](/azure/service-health/resource-health-overview)
+* [Azure Monitor Alerts](/azure/azure-monitor/alerts/alerts-overview)
azure-resource-manager Resource Manager Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-manager-personal-data.md
Title: Personal data
description: Learn how to manage personal data associated with Azure Resource Manager operations. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Manage personal data associated with Azure Resource Manager
To delete **tags**, use:
* [az tag delete](/cli/azure/tag#az-tag-delete) ## Next steps+ * For an overview of Azure Resource Manager, see the [What is Resource Manager?](overview.md)
azure-resource-manager Resource Providers And Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-providers-and-types.md
Title: Resource providers and resource types description: Describes the resource providers that support Azure Resource Manager. It describes their schemas, available API versions, and the regions that can host the resources. Previously updated : 07/14/2023 Last updated : 09/26/2024 content_well_notification: - AI-contribution
For a list that maps resource providers to Azure services, see [Resource provide
## Register resource provider
-Before you use a resource provider, you must make sure your Azure subscription is registered for the resource provider. Registration configures your subscription to work with the resource provider.
+Before you use a resource provider, you must make sure your Azure subscription is registered for the resource provider. Registration configures your subscription to work with the resource provider.
> [!IMPORTANT] > Register a resource provider only when you're ready to use it. This registration step helps maintain least privileges within your subscription. A malicious user can't use unregistered resource providers.
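For example, the following Azure CLI sketch registers a resource provider and checks its registration state. The `Microsoft.Batch` namespace is only an illustrative choice; substitute the provider you actually need.

```azurecli
# Register a resource provider in the current subscription (illustrative namespace).
az provider register --namespace Microsoft.Batch

# Check the registration state; it can take a few minutes to show "Registered".
az provider show --namespace Microsoft.Batch --query registrationState --output tsv
```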
West US
... ``` - ## Next steps
-* To learn about creating Resource Manager templates, see [Authoring Azure Resource Manager templates](../templates/syntax.md).
+* To learn about creating Resource Manager templates, see [Authoring Azure Resource Manager templates](../templates/syntax.md).
* To view the resource provider template schemas, see [Template reference](/azure/templates/). * For a list that maps resource providers to Azure services, see [Resource providers for Azure services](azure-services-resource-providers.md). * To view the operations for a resource provider, see [Azure REST API](/rest/api/).
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.BotService
-* botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
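As a rough sketch, registering one of these features with the Azure CLI looks like the following; the feature name and namespace come from the entry above, and propagation can take a while.

```azurecli
# Register the preview feature named in the entry above.
az feature register --namespace Microsoft.Resources --name ARMDisableResourcesPerRGLimit

# Re-register the resource provider so the feature registration propagates.
az provider register --namespace Microsoft.Resources
```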
## Microsoft.Cdn
-* profiles - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
-* profiles/networkpolicies - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* profiles - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* profiles/networkpolicies - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
## Microsoft.Compute
Some resources have a limit on the number instances per region. This limit is di
* snapshots * virtualMachines * virtualMachines/extensions
-* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
## Microsoft.ContainerInstance
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Fabric
-* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Fabric/UnlimitedResourceGroupQuota
+* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Fabric/UnlimitedResourceGroupQuota
## Microsoft.GuestConfiguration
Some resources have a limit on the number instances per region. This limit is di
* applicationSecurityGroups * customIpPrefixes * ddosProtectionPlans
-* loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
* networkIntentPolicies * networkInterfaces * networkSecurityGroups
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.NetworkFunction
-* vpnBranches - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NetworkFunction/AllowNaasVpnAccess
+* vpnBranches - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NetworkFunction/AllowNaasVpnAccess
## Microsoft.NotificationHubs
-* namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit
-* namespaces/notificationHubs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit
+* namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit
+* namespaces/notificationHubs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit
## Microsoft.PowerBI
-* workspaceCollections - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBI/UnlimitedQuota
+* workspaceCollections - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBI/UnlimitedQuota
## Microsoft.PowerBIDedicated
-* autoScaleVCores - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota
-* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota
+* autoScaleVCores - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota
+* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota
## Microsoft.Relay
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.StreamAnalytics
-* streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit
+* streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit
## Microsoft.Web
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/06/2024 Last updated : 09/26/2024
azure-resource-manager Tag Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-policies.md
Title: Policy definitions for tagging resources description: Describes the Azure Policy definitions that you can assign to ensure tag compliance. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Assign policy definitions for tag compliance
azure-resource-manager Tag Resources Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-bicep.md
Title: Tag resources, resource groups, and subscriptions with Bicep
description: Shows how to use Bicep to apply tags to Azure resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Apply tags with Bicep
azure-resource-manager Tag Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-cli.md
Title: Tag resources, resource groups, and subscriptions with Azure CLI
description: Shows how to use Azure CLI to apply tags to Azure resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Apply tags with Azure CLI
azure-resource-manager Tag Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-portal.md
Title: Tag resources, resource groups, and subscriptions with Azure portal description: Shows how to use Azure portal to apply tags to Azure resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Apply tags with Azure portal
azure-resource-manager Tag Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-powershell.md
Title: Tag resources, resource groups, and subscriptions with Azure PowerShell
description: Shows how to use Azure PowerShell to apply tags to Azure resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Apply tags with Azure PowerShell
azure-resource-manager Tag Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-python.md
Title: Tag resources, resource groups, and subscriptions with Python description: Shows how to use Python to apply tags to Azure resources. Previously updated : 01/27/2024 Last updated : 09/26/2024 content_well_notification: - AI-contribution
azure-resource-manager Tag Resources Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-templates.md
Title: Tag resources, resource groups, and subscriptions with ARM templates description: Shows how to use ARM templates to apply tags to Azure resources. Previously updated : 03/19/2024 Last updated : 09/26/2024 # Apply tags with ARM templates
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Describes the conditions and limitations for using tags with Azure resources. Previously updated : 01/04/2024 Last updated : 09/26/2024 # Use tags to organize your Azure resources and management hierarchy
Resource tags support all cost-accruing services. To ensure that cost-accruing s
There are two ways to get the required access to tag resources. -- You can have write access to the `Microsoft.Resources/tags` resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access. The tag contributor role, for example, can't apply tags to resources or resource groups through the portal. It can, however, apply tags to subscriptions through the portal. It supports all tag operations through Azure PowerShell and REST API.
+* You can have write access to the `Microsoft.Resources/tags` resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access. The tag contributor role, for example, can't apply tags to resources or resource groups through the portal. It can, however, apply tags to subscriptions through the portal. It supports all tag operations through Azure PowerShell and REST API.
-- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. To apply tags to virtual machines, for example, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
+* You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. To apply tags to virtual machines, for example, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
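For example, a minimal Azure CLI sketch of granting the Tag Contributor role at subscription scope; the user and subscription ID are placeholders.

```azurecli
# Grant the Tag Contributor role so the assignee can tag resources without
# having write access to the resources themselves (placeholder values).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Tag Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```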
## Inherit tags
You can retrieve information about tags by downloading the usage file available
For REST API operations, see [Azure Billing REST API Reference](/rest/api/billing/).
-## Unique tags pagination
+## Unique tags pagination
-When calling the [Unique Tags API](/rest/api/resources/tags/list) there is a limit to the size of each API response page that is returned. A tag that has a large set of unique values will require the API to fetch the next page to retrieve the remaining set of values. When this happens the tag key is shown again to indicate that the values are still under this key.
+When calling the [Unique Tags API](/rest/api/resources/tags/list), there's a limit to the size of each API response page that is returned. A tag that has a large set of unique values requires the API to fetch the next page to retrieve the remaining set of values. When this happens, the tag key is shown again to indicate that the values are still under this key.
This can cause some tools, like the Azure portal, to show the tag key twice.
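If you want to see the paginated response yourself, a sketch of calling the API with `az rest` follows; the subscription ID is a placeholder and the API version is an assumption.

```azurecli
# List tag names and values in the subscription; a tag with many values may span pages.
az rest --method get \
  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/tagNames?api-version=2021-04-01"
```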
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 02/05/2024 Last updated : 09/26/2024 # Tag support for Azure resources
azure-resource-manager Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tls-support.md
Title: TLS version supported by Azure Resource Manager
description: Describes the deprecation of TLS versions prior to 1.2 in Azure Resource Manager Previously updated : 01/27/2024 Last updated : 09/26/2024 # Migrating to TLS 1.2 for Azure Resource Manager
Azure Resource Manager is the deployment and management service for Azure. You u
## Prepare for migration to TLS 1.2
-We recommend the following steps as you prepare to migrate your clients to TLS 1.2:
+We recommend the following steps as you prepare to migrate your clients to TLS 1.2:
* Update your operating system to the latest version. * Update your development libraries and frameworks to their latest versions. For example, Python 3.8 supports TLS 1.2.
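As a quick, non-authoritative check that a client machine can negotiate TLS 1.2 with the Azure Resource Manager endpoint, you can use OpenSSL (assuming it's installed):

```bash
# Attempt a TLS 1.2 handshake with the Azure Resource Manager endpoint.
openssl s_client -connect management.azure.com:443 -tls1_2 </dev/null
```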
For a more detailed guidance, see the [checklist to deprecate older TLS versions
* [Solving the TLS 1.0 Problem, 2nd Edition](/security/engineering/solving-tls1-problem) – deep dive into migrating to TLS 1.2. * [How to enable TLS 1.2 on clients](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client) – for Microsoft Configuration Manager.
-* [Configure Transport Layer Security (TLS) for a client application](../../storage/common/transport-layer-security-configure-client-version.md) – contains instructions to update TLS version in PowerShell
+* [Configure Transport Layer Security (TLS) for a client application](../../storage/common/transport-layer-security-configure-client-version.md) – contains instructions to update TLS version in PowerShell.
* [Enable support for TLS 1.2 in your environment for Microsoft Entra TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment) – contains information on updating TLS version for WinHTTP. * [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) – best practices when configuring security protocols for applications targeting .NET Framework. * [TLS best practices with the .NET Framework](https://github.com/dotnet/docs/issues/4675) – GitHub to ask questions about best practices with .NET Framework.
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/best-practices.md
Title: Best practices for templates
description: Describes recommended approaches for authoring Azure Resource Manager templates (ARM templates). Offers suggestions to avoid common problems when using templates. Previously updated : 09/25/2024 Last updated : 09/26/2024 # ARM template best practices
The following information can be helpful when you work with [resources](./syntax
] ```
- For more details about comments and metadata see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata).
+ For more details about comments and metadata, see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata).
* If you use a *public endpoint* in your template (such as an Azure Blob storage public endpoint), *don't hard-code* the namespace. Use the `reference` function to dynamically retrieve the namespace. You can use this approach to deploy the template to different public namespace environments without manually changing the endpoint in the template. Set the API version to the same version that you're using for the storage account in your template.
The following information can be helpful when you work with [resources](./syntax
## Comments
-In addition to the `comments` property, comments using the `//` syntax are supported. For more details about comments and metadata see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata). You may choose to save JSON files that contain `//` comments using the `.jsonc` file extension, to indicate the JSON file contains comments. The ARM service will also accept comments in any JSON file including parameters files.
+In addition to the `comments` property, comments using the `//` syntax are supported. For more details about comments and metadata, see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata). You may choose to save JSON files that contain `//` comments using the `.jsonc` file extension, to indicate the JSON file contains comments. The ARM service will also accept comments in any JSON file including parameters files.
## Visual Studio Code ARM Tools
-Working with ARM templates is much easier with the Azure Resource Manager (ARM) Tools for Visual Studio Code. This extension provides language support, resource snippets, and resource auto-completion to help you create and validate Azure Resource Manager templates. To learn more and install the extension, see [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
+Working with ARM templates is easier with the Azure Resource Manager (ARM) Tools for Visual Studio Code. This extension provides language support, resource snippets, and resource auto-completion to help you create and validate Azure Resource Manager templates. To learn more and install the extension, see [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
## Use test toolkit
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-resources.md
Title: Deploy multiple instances of resources
description: Use copy operation and arrays in an Azure Resource Manager template (ARM template) to deploy resource type many times. Previously updated : 08/30/2023 Last updated : 09/26/2024 # Resource iteration in ARM templates
You can also use copy loop with [properties](copy-properties.md), [variables](co
If you need to specify whether a resource is deployed at all, see [condition element](conditional-resource-deployment.md). - > [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [loops](../bicep/loops.md).
azure-resource-manager Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/definitions.md
Title: Type definitions in templates
description: Describes how to create type definitions in an Azure Resource Manager template (ARM template). Previously updated : 08/22/2023 Last updated : 09/26/2024 # Type definitions in ARM templates
If the value is true, elements of the array whose index is greater than the larg
The nullable constraint indicates that the value may be `null` or omitted. See [Properties](#properties) for an example. - ## Description You can add a description to a type definition to help users of your template understand the value to provide.
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-cli.md
Title: Azure deployment templates with Azure CLI – Azure Resource Manager | Microsoft Docs description: Use Azure Resource Manager and Azure CLI to create and deploy resource groups to Azure. The resources are defined in an Azure deployment template. Previously updated : 10/10/2023 Last updated : 09/26/2024 keywords: azure cli deploy arm template, create resource group azure, azure deployment template, deployment resources, arm template, azure arm template
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-github-actions.md
Title: Deploy Resource Manager templates by using GitHub Actions description: Describes how to deploy Azure Resource Manager templates (ARM templates) by using GitHub Actions. Previously updated : 06/23/2023 Last updated : 09/26/2024
Add a Resource Manager template to your GitHub repository. This template creates
https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json ```
-You can put the file anywhere in the repository. The workflow sample in the next section assumes the template file is named **azuredeploy.json**, and it is stored at the root of your repository.
+You can put the file anywhere in the repository. The workflow sample in the next section assumes the template file is named **azuredeploy.json**, and it's stored at the root of your repository.
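One way to do that, sketched below with generic curl and Git commands from the root of a local clone of your repository, is to download the quickstart template referenced above and commit it:

```bash
# Download the quickstart template to the root of the repository clone,
# then commit and push it (assumes you're in the clone's root directory).
curl -o azuredeploy.json https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
git add azuredeploy.json
git commit -m "Add ARM template"
git push
```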
## Create workflow
The workflow file must be stored in the **.github/workflows** folder at the root
The first section of the workflow file includes: - **name**: The name of the workflow.
- - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file.
 + - **on**: The name of the GitHub events that trigger the workflow. The workflow is triggered when there's a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file.
# [OpenID Connect](#tab/openid)
The workflow file must be stored in the **.github/workflows** folder at the root
The first section of the workflow file includes: - **name**: The name of the workflow.
- - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file.
 + - **on**: The name of the GitHub events that trigger the workflow. The workflow is triggered when there's a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file.
1. Select **Start commit**.
Because the workflow is configured to be triggered by either the workflow file o
## Check workflow status
-1. Select the **Actions** tab. You will see a **Create deployStorageAccount.yml** workflow listed. It takes 1-2 minutes to run the workflow.
+1. Select the **Actions** tab. You see a **Create deployStorageAccount.yml** workflow listed. It takes 1-2 minutes to run the workflow.
1. Select the workflow to open it. 1. Select **Run ARM deploy** from the menu to verify the deployment.
azure-resource-manager Deployment Tutorial Linked Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-linked-template.md
Title: Tutorial - Deploy a linked template description: Learn how to deploy a linked template Previously updated : 10/05/2023 Last updated : 09/26/2024
You can separate the storage account resource into a linked template:
:::code language="json" source="~/resourcemanager-templates/get-started-deployment/linked-template/linkedStorageAccount.json":::
-The following template is the main template. The highlighted `Microsoft.Resources/deployments` object shows how to call a linked template. The linked template cannot be stored as a local file or a file that is only available on your local network. You can either provide a URI value of the linked template that includes either HTTP or HTTPS, or use the _relativePath_ property to deploy a remote linked template at a location relative to the parent template. One option is to place both the main template and the linked template in a storage account.
+The following template is the main template. The highlighted `Microsoft.Resources/deployments` object shows how to call a linked template. The linked template can't be stored as a local file or a file that is only available on your local network. You can either provide a URI value of the linked template that includes either HTTP or HTTPS, or use the _relativePath_ property to deploy a remote linked template at a location relative to the parent template. One option is to place both the main template and the linked template in a storage account.
:::code language="json" source="~/resourcemanager-templates/get-started-deployment/linked-template/azuredeploy.json" highlight="34-52":::
Write-Host "Press [ENTER] to continue ..."
## Deploy template
-To deploy templates in a storage account, generate a SAS token and supply it to the _-QueryString_ parameter. Set the expiry time to allow enough time to complete the deployment. The blobs containing the templates are accessible to only the account owner. However, when you create a SAS token for a blob, the blob is accessible to anyone with that SAS token. If another user intercepts the URI and the SAS token, that user is able to access the template. A SAS token is a good way of limiting access to your templates, but you should not include sensitive data like passwords directly in the template.
+To deploy templates in a storage account, generate a SAS token and supply it to the _-QueryString_ parameter. Set the expiry time to allow enough time to complete the deployment. The blobs containing the templates are accessible to only the account owner. However, when you create a SAS token for a blob, the blob is accessible to anyone with that SAS token. If another user intercepts the URI and the SAS token, that user is able to access the template. A SAS token is a good way of limiting access to your templates, but you shouldn't include sensitive data like passwords directly in the template.
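The tutorial's script uses Azure PowerShell; as a rough equivalent for orientation only, an Azure CLI sketch of the same idea follows. The storage account, container, and resource group names are placeholders.

```azurecli
# Generate a short-lived, read-only SAS token for the container that holds the templates.
sasToken=$(az storage container generate-sas \
  --account-name mystorageaccount \
  --name templates \
  --permissions r \
  --expiry 2025-01-01T00:00:00Z \
  --https-only \
  --output tsv)

# Deploy the main template; the SAS token is appended to the template URIs.
az deployment group create \
  --resource-group myResourceGroup \
  --template-uri "https://mystorageaccount.blob.core.windows.net/templates/azuredeploy.json" \
  --query-string "$sasToken"
```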
If you haven't created the resource group, see [Create resource group](./deployment-tutorial-local-template.md#create-resource-group).
azure-resource-manager Deployment Tutorial Local Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-local-template.md
Title: Tutorial - Deploy a local Azure Resource Manager template description: Learn how to deploy an Azure Resource Manager template (ARM template) from your local computer Previously updated : 10/05/2023 Last updated : 09/26/2024
azure-resource-manager Deployment Tutorial Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md
Title: Continuous integration with Azure Pipelines description: Learn how to continuously build, test, and deploy Azure Resource Manager templates (ARM templates). Previously updated : 06/20/2024 Last updated : 09/26/2024
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/linked-templates.md
Title: Link templates for deployment description: Describes how to use linked templates in an Azure Resource Manager template (ARM template) to create a modular template solution. Shows how to pass parameters values, specify a parameter file, and dynamically created URLs. Previously updated : 08/22/2023 Last updated : 09/26/2024
The following example deploys a storage account through a nested template.
} ```
-[Nested resources](./child-resource-name-type.md#within-parent-resource) can't be used in a [symbolic name](./resource-declaration.md#use-symbolic-name) template. In the following template, the nested storage account resource cannot use symbolic name:
+[Nested resources](./child-resource-name-type.md#within-parent-resource) can't be used in a [symbolic name](./resource-declaration.md#use-symbolic-name) template. In the following template, the nested storage account resource can't use symbolic name:
```json {
az deployment group create \
-Make sure there is no leading "?" in QueryString. The deployment adds one when assembling the URI for the deployments.
+Make sure there's no leading "?" in QueryString. The deployment adds one when assembling the URI for the deployments.
## Template specs
azure-resource-manager Parameter File Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameter-file-test-cases.md
Title: Parameter file test cases for Azure Resource Manager test toolkit
description: Describes the parameter file tests that are run by the Azure Resource Manager template test toolkit. Previously updated : 06/23/2023 Last updated : 09/26/2024 # Test cases for parameter files
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameters.md
Title: Parameters in templates
description: Describes how to define parameters in an Azure Resource Manager template (ARM template). Previously updated : 08/22/2023 Last updated : 09/26/2024 # Parameters in ARM templates
In addition to minValue, maxValue, minLength, maxLength, and allowedValues, [lan
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [parameters](../bicep/parameters.md).
-You are limited to 256 parameters in a template. For more information, see [Template limits](./best-practices.md#template-limits).
+You're limited to 256 parameters in a template. For more information, see [Template limits](./best-practices.md#template-limits).
For parameter best practices, see [Parameters](./best-practices.md#parameters).
If the value is true, elements of the array whose index is greater than the larg
} ``` - ## nullable constraint The nullable constraint can only be used with [languageVersion 2.0](./syntax.md#languageversion-20). It indicates that the value may be `null` or omitted. See [Properties](#properties) for an example.
The following examples demonstrate scenarios for using parameters.
## Next steps
-* To learn about the available properties for parameters, see [Understand the structure and syntax of ARM templates](./syntax.md).
-* To learn about passing in parameter values as a file, see [Create Resource Manager parameter file](parameter-files.md).
-* For recommendations about creating parameters, see [Best practices - parameters](./best-practices.md#parameters).
+- To learn about the available properties for parameters, see [Understand the structure and syntax of ARM templates](./syntax.md).
+- To learn about passing in parameter values as a file, see [Create Resource Manager parameter file](parameter-files.md).
+- For recommendations about creating parameters, see [Best practices - parameters](./best-practices.md#parameters).
azure-resource-manager Quickstart Create Templates Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md
Title: Create template - Visual Studio Code description: Use Visual Studio Code and the Azure Resource Manager tools extension to work on Azure Resource Manager templates (ARM templates). Previously updated : 07/28/2023 Last updated : 09/26/2024 #Customer intent: As a developer new to Azure deployment, I want to learn how to use Visual Studio Code to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources.
Update the name of the parameter to `storageAccountName` and the description to
:::image type="content" source="./media/quickstart-create-templates-use-visual-studio-code/10.png" alt-text="Screenshot showing the completed parameter in an ARM template.":::
-Azure storage account names have a minimum length of 3 characters and a maximum of 24. Add both `minLength` and `maxLength` to the parameter and provide appropriate values.
+Azure storage account names have a minimum length of three characters and a maximum of 24. Add both `minLength` and `maxLength` to the parameter and provide appropriate values.
:::image type="content" source="./media/quickstart-create-templates-use-visual-studio-code/11.png" alt-text="Screenshot showing minLength and maxLength being added to an ARM template parameter.":::
New-AzResourceGroup -Name arm-vscode -Location eastus
New-AzResourceGroupDeployment -ResourceGroupName arm-vscode -TemplateFile ./azuredeploy.json -TemplateParameterFile ./azuredeploy.parameters.json ```+ ## Clean up resources
az group delete --name arm-vscode
```azurepowershell Remove-AzResourceGroup -Name arm-vscode ```+ ## Next steps
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
Title: Set deployment order for resources description: Describes how to set one Azure resource as dependent on another resource during deployment. The dependencies ensure resources are deployed in the correct order. Previously updated : 08/22/2023 Last updated : 09/26/2024 # Define the order for deploying resources in ARM templates
Azure Resource Manager evaluates the dependencies between resources, and deploys
Within your Azure Resource Manager template (ARM template), the `dependsOn` element enables you to define one resource as a dependent on one or more resources. Its value is a JavaScript Object Notation (JSON) array of strings, each of which is a resource name or ID. The array can include resources that are [conditionally deployed](conditional-resource-deployment.md). When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
-The following example shows a network interface that depends on a virtual network, network security group, and public IP address. For the full template, see [the quickstart template for a Linux VM](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-simple-linux/azuredeploy.json).
+The following example shows a network interface that depends on a virtual network, network security group, and public IP address. For the full template, see [the quickstart template for a Linux virtual machine](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-simple-linux/azuredeploy.json).
```json {
azure-resource-manager Resource Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-extensions.md
Title: Post-deployment configuration with extensions
description: Learn how to use Azure Resource Manager template (ARM template) extensions for post-deployment configurations. Previously updated : 06/23/2023 Last updated : 09/26/2024 # Post-deployment configurations by using extensions
azure-resource-manager Scope Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/scope-functions.md
Title: Template functions in scoped deployments
description: Describes how template functions are resolved in scoped deployments. The scope can be a tenant, management groups, subscriptions, and resource groups. Previously updated : 06/23/2023 Last updated : 09/26/2024 # ARM template functions in deployment scopes
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Title: Template structure and syntax
description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 08/22/2023 Last updated : 09/26/2024 # Understand the structure and syntax of ARM templates
You can break a string into multiple lines. For example, see the `location` prop
> > Multi-line strings aren't supported when you deploy the template through the Azure portal, a DevOps pipeline, or the REST API. - ```json { "type": "Microsoft.Compute/virtualMachines",
To use languageVersion 2.0, add `"languageVersion": "2.0"` to your template:
} ```
-The enhancements and changes that comes with languageVersion 2.0:
--- Use symbolic name in ARM JSON template. For more information, see [Use symbolic name](./resource-declaration.md#use-symbolic-name).-- Use symbolic name in resource copy loops. See [Use symbolic name](./copy-resources.md#use-symbolic-name).-- Use symbolic name in `dependsOn` arrays. See [DependsOn](./resource-dependency.md#dependson) and [Depend on resources in a loop](./resource-dependency.md#depend-on-resources-in-a-loop).-- Use symbolic name instead of resource name in the `reference` function. See [reference](./template-functions-resource.md#reference).-- A references() function that returns an array of objects representing a resource collection's runtime states. See [references](./template-functions-resource.md#references).-- Use the 'existing' resource property to declare existing resources for ARM to read rather than deploy a resource. See [Declare existing resources](./resource-declaration.md#declare-existing-resources).-- Create user-defined types. See [Type definition](./definitions.md).-- Additional aggregate type validation constraints to be used in [parameters](./parameters.md) and [outputs](./outputs.md).-- The default value for the `expressionEvaluationOptions` property is `inner`. The value `outer` is blocked. See [Expression evaluation scope in nested templates](./linked-templates.md#expression-evaluation-scope-in-nested-templates).-- The `deployment` function returns a limited subset of properties. See [deployment](./template-functions-deployment.md#deployment).-- If Deployments resource is used in a symbolic-name deployment, use apiVersion `2020-09-01` or later.-- In resource definition, double-escaping values within an expression is no longer needed. See [Escape characters](./template-expressions.md#escape-characters).
+The enhancements and changes that come with languageVersion 2.0:
+
+* Use symbolic name in ARM JSON template. For more information, see [Use symbolic name](./resource-declaration.md#use-symbolic-name).
+* Use symbolic name in resource copy loops. See [Use symbolic name](./copy-resources.md#use-symbolic-name).
+* Use symbolic name in `dependsOn` arrays. See [DependsOn](./resource-dependency.md#dependson) and [Depend on resources in a loop](./resource-dependency.md#depend-on-resources-in-a-loop).
+* Use symbolic name instead of resource name in the `reference` function. See [reference](./template-functions-resource.md#reference).
+* A references() function that returns an array of objects representing a resource collection's runtime states. See [references](./template-functions-resource.md#references).
+* Use the 'existing' resource property to declare existing resources for ARM to read rather than deploy a resource. See [Declare existing resources](./resource-declaration.md#declare-existing-resources).
+* Create user-defined types. See [Type definition](./definitions.md).
+* Additional aggregate type validation constraints to be used in [parameters](./parameters.md) and [outputs](./outputs.md).
+* The default value for the `expressionEvaluationOptions` property is `inner`. The value `outer` is blocked. See [Expression evaluation scope in nested templates](./linked-templates.md#expression-evaluation-scope-in-nested-templates).
+* The `deployment` function returns a limited subset of properties. See [deployment](./template-functions-deployment.md#deployment).
+* If Deployments resource is used in a symbolic-name deployment, use apiVersion `2020-09-01` or later.
+* In resource definition, double-escaping values within an expression is no longer needed. See [Escape characters](./template-expressions.md#escape-characters).
## Next steps
azure-resource-manager Template Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-expressions.md
Title: Template syntax and expressions
description: Describes the declarative JSON syntax for Azure Resource Manager templates (ARM templates). Previously updated : 08/22/2023 Last updated : 09/26/2024 # Syntax and expressions in ARM templates
To pass a string value as a parameter to a function, use single quotes.
"name": "[concat('storage', uniqueString(resourceGroup().id))]" ```
-Most functions work the same whether they are deployed to a resource group, subscription, management group, or tenant. The following functions have restrictions based on the scope:
+Most functions work the same whether they're deployed to a resource group, subscription, management group, or tenant. The following functions have restrictions based on the scope:
* [resourceGroup](template-functions-resource.md#resourcegroup) - can only be used in deployments to a resource group. * [resourceId](template-functions-resource.md#resourceid) - can be used at any scope, but the valid parameters change depending on the scope.
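For orientation, these are the Azure CLI commands that target each deployment scope; the resource group, management group, and file names are placeholders.

```azurecli
# Resource group scope: resourceGroup() is valid here.
az deployment group create --resource-group demoRG --template-file main.json

# Subscription scope.
az deployment sub create --location eastus --template-file sub.json

# Management group scope.
az deployment mg create --management-group-id demoMG --location eastus --template-file mg.json

# Tenant scope.
az deployment tenant create --location eastus --template-file tenant.json
```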
azure-resource-manager Update Visual Studio Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/update-visual-studio-deployment-script.md
Title: Update Visual Studio's template deployment script to use Az PowerShell description: Update the Visual Studio template deployment script from AzureRM to Az PowerShell Previously updated : 06/23/2023 Last updated : 09/26/2024 # Update Visual Studio template deployment script to use Az PowerShell module
azure-resource-manager Create Troubleshooting Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/create-troubleshooting-template.md
description: Describes how to create a template to troubleshoot Azure resource d
tags: top-support-issue Previously updated : 04/05/2023 Last updated : 09/26/2024 # Create a troubleshooting template
azure-resource-manager Deployment Quota Exceeded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/deployment-quota-exceeded.md
Title: Deployment quota exceeded description: Describes how to resolve the error of having more than 800 deployments in the resource group history. Previously updated : 04/05/2023 Last updated : 09/26/2024 # Resolve error when deployment count exceeds 800
batch Batch Pool Update Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-update-properties.md
+
+ Title: Update pool properties
+description: Learn how to update existing Batch pool properties.
+ Last updated : 09/26/2024+++
+# Update Batch pool properties
+
+When you create an Azure Batch pool, you specify certain properties that define the configuration
+of the pool. Examples include specifying the VM size, VM image to use, virtual network configuration,
+and encryption settings. However, you may need to update pool properties as your workload evolves
+over time or if a VM image reaches end-of-life.
+
+Some, but not all, of these pool properties can be patched or updated to accommodate these situations.
+This article provides information about updateable pool properties, expected behaviors for
+pool property updates, and examples.
+
+> [!TIP]
+> Some pool properties can only be updated with the
+> [Batch Management Plane APIs or SDKs](batch-apis-tools.md#batch-management-apis) using Microsoft
+> Entra authentication. You need to install or use the appropriate [API or SDK](batch-apis-tools.md)
+> for these operations to be available.
+
+## Updateable pool properties
+
+Batch provides multiple methods to update properties on a pool. Selecting which API to use
+determines the set of pool properties that can be updated as well as the update behavior.
+
+> [!NOTE]
+> If you want to update pool properties that aren't part of the following Update or Patch
+> APIs, then you must recreate the pool to reflect the desired state.
+
+### Management Plane: Pool - Update
+
+The recommended way to update pool properties is the
+[Pool - Update API](/rest/api/batchmanagement/pool/update), part of the
+[Batch Management Plane API or SDK](batch-apis-tools.md#batch-management-apis). This API provides
+the most comprehensive and flexible way to update pool properties. It lets you update
+management plane-only pool properties and other properties that would otherwise be immutable
+through the Data Plane APIs.
+
+> [!IMPORTANT]
+> You must use API version 2024-07-01 or newer of the Batch Management Plane API for updating pool
+> properties as described in this section.
+
+Since this operation is a `PATCH`, only pool properties specified in the request are updated.
+If properties aren't specified as part of the request, then the existing values remain unmodified.
+
+Some properties can only be updated when the pool has no active nodes in it or when the total
+number of compute nodes in the pool is zero. The properties that *don't* require the pool
+to be size zero for the new value to take effect are:
+
+- applicationPackages
+- certificates
+- metadata
+- scaleSettings
+- startTask
+
+If there are active nodes when the pool is updated with these properties, a reboot of the active
+compute nodes may be required for the changes to take effect. For more information, see the
+documentation for each individual pool property.
+
+All other updateable pool properties require the pool to have zero nodes before the update
+request is accepted.
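For example, a minimal sketch of scaling a pool down to zero nodes with the Azure CLI before submitting such an update; the pool, account, and resource group names are placeholders.

```azurecli
# Authenticate the CLI against the Batch account (placeholder names).
az batch account login --name mybatchaccount --resource-group myresourcegroup

# Scale the pool to zero dedicated and zero Spot/low-priority nodes.
az batch pool resize --pool-id mypool \
  --target-dedicated-nodes 0 \
  --target-low-priority-nodes 0
```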
+
+You may also use the [Pool - Create API](/rest/api/batchmanagement/pool/create) to update these
+properties, but because the operation is a `PUT`, the request fully replaces all
+existing properties. Therefore, any property that isn't specified in the request is removed
+or set to its default value.
+
+#### Example: Update VM Image Specification
+
+The following example shows how to update a pool VM image configuration via the Management Plane C# SDK:
+
+```csharp
+// Namespaces assumed for this sketch (Azure.ResourceManager.Batch management SDK).
+using System;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Core;
+using Azure.Identity;
+using Azure.ResourceManager;
+using Azure.ResourceManager.Batch;
+using Azure.ResourceManager.Batch.Models;
+
+public async Task UpdatePoolVmImage()
+{
+ // Authenticate
+ var clientId = Environment.GetEnvironmentVariable("CLIENT_ID");
+ var clientSecret = Environment.GetEnvironmentVariable("CLIENT_SECRET");
+ var tenantId = Environment.GetEnvironmentVariable("TENANT_ID");
+ var subscriptionId = Environment.GetEnvironmentVariable("SUBSCRIPTION_ID");
+ ClientSecretCredential credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
+ ArmClient client = new ArmClient(credential, subscriptionId);
+
+ // Get an existing Batch account
+ string resourceGroupName = "<resourcegroup>";
+ string accountName = "<batchaccount>";
+ ResourceIdentifier batchAccountResourceId = BatchAccountResource.CreateResourceIdentifier(subscriptionId, resourceGroupName, accountName);
+ BatchAccountResource batchAccount = client.GetBatchAccountResource(batchAccountResourceId);
+
+ // get the collection of this BatchAccountPoolResource
+ BatchAccountPoolCollection collection = batchAccount.GetBatchAccountPools();
+
+ // Update the pool
+ string poolName = "mypool";
+ BatchAccountPoolData data = new BatchAccountPoolData()
+ {
+ DeploymentConfiguration = new BatchDeploymentConfiguration()
+ {
+ VmConfiguration = new BatchVmConfiguration(new BatchImageReference()
+ {
+ Publisher = "MicrosoftWindowsServer",
+ Offer = "WindowsServer",
+ Sku = "2022-datacenter-azure-edition-smalldisk",
+ Version = "latest",
+ },
+ nodeAgentSkuId: "batch.node.windows amd64"),
+ },
+ };
+
+ ArmOperation<BatchAccountPoolResource> lro = await collection.CreateOrUpdateAsync(WaitUntil.Completed, poolName, data);
+ BatchAccountPoolResource result = lro.Value;
+
+ BatchAccountPoolData resourceData = result.Data;
+ Console.WriteLine($"Succeeded on id: {resourceData.Id}");
+}
+```
+
+#### Example: Update VM Size and Target Node Communication Mode
+
+The following example shows how to update a pool's VM size and set the target node communication
+mode to simplified via the REST API:
+
+```http
+PATCH https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/<resourcegroupName>/providers/Microsoft.Batch/batchAccounts/<batchaccountname>/pools/<poolname>?api-version=2024-07-01
+```
+
+Request Body
+
+```json
+{
+ "type": "Microsoft.Batch/batchAccounts/pools",
+ "parameters": {
+ "properties": {
+ "vmSize": "standard_d32ads_v5",
+ "targetNodeCommunicationMode": "simplified"
+ }
+ }
+}
+```
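If you prefer the Azure CLI over a raw HTTP client, a sketch of sending the same request with `az rest` follows; it assumes you're signed in with rights on the Batch account and that the request body shown above is saved locally as `body.json`.

```azurecli
# Send the PATCH request shown above through Azure Resource Manager.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/<resourcegroupName>/providers/Microsoft.Batch/batchAccounts/<batchaccountname>/pools/<poolname>?api-version=2024-07-01" \
  --body @body.json
```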
+
+### Data Plane: Pool - Patch or Update Properties
+
+The Data Plane offers the ability to either patch or update select pool properties. The
+available APIs are the [Pool - Patch API](/rest/api/batchservice/pool/patch) or the
+[Pool - Update Properties API](/rest/api/batchservice/pool/update-properties) as part of
+the [Batch Data Plane API or SDK](batch-apis-tools.md#batch-service-apis).
+
+The [Patch API](/rest/api/batchservice/pool/patch) allows patching of select pool properties
+as specified in the documentation, such as the `startTask`. Since this operation is a `PATCH`,
+only pool properties specified in the request are updated. If properties aren't specified as
+part of the request, then the existing values remain unmodified.
+
+The [Update Properties API](/rest/api/batchservice/pool/update-properties) allows selective
+update of the pool properties as specified in the documentation. This request fully
+replaces the existing properties; therefore, any property that isn't specified in the
+request is removed.
+
+Compute nodes must be rebooted for changes to take effect for the following properties:
+
+- applicationPackageReferences
+- certificateReferences
+- startTask
+
+The pool must be resized to zero active nodes for updates to the `targetNodeCommunicationMode`
+property.
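As a sketch of this flow with the Azure CLI (assuming `az batch account login` has already been run, and that `az batch pool set` exposes the start task options shown, which you should verify against your CLI version), you could patch the start task and then reboot the existing nodes:

```azurecli
# Patch the pool's start task command line (placeholder pool name and command).
az batch pool set --pool-id mypool --start-task-command-line "cmd /c echo updated"

# Reboot each existing compute node so the updated start task takes effect.
for node in $(az batch node list --pool-id mypool --query "[].id" --output tsv); do
  az batch node reboot --pool-id mypool --node-id "$node"
done
```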
+
+## FAQs
+
+- Do I need to perform any other operations after updating pool properties while the pool
+has active nodes?
+
+Yes. Among the pool properties that can be updated with active nodes, some properties
+require compute nodes to be rebooted before the change takes effect. Alternatively, the pool
+can be scaled down to zero nodes to reflect the modified properties.
+
+- Can I modify the Managed identity collection on the pool while the pool has active nodes?
+
+Yes, but you shouldn't. While Batch doesn't prohibit modifying the collection while there are
+active nodes, doing so can lead to an inconsistent identity collection if the pool scales out.
+We recommend updating this property only when the pool is sized to zero.
+For more information, see the [Configure managed identities](managed-identity-pools.md) article.
+
+## Next steps
+
+- Learn more about available Batch [APIs and tools](batch-apis-tools.md).
+- Learn how to [check pools and nodes for errors](batch-pool-node-error-checking.md).
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
# Tutorial: Add Azure Content Delivery Network to an Azure App Service web app + This tutorial shows how to add [Azure Content Delivery Network](cdn-overview.md) to a [web app in Azure App Service](../app-service/overview.md). Web apps are services for hosting web applications, REST APIs, and mobile back ends. Here's the home page of the sample static HTML site that you work with:
cdn Cdn App Dev Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-net.md
# Get started with the Azure CDN Library for .NET ++ > [!div class="op_single_selector"] > - [Node.js](cdn-app-dev-node.md) > - [.NET](cdn-app-dev-net.md)
cdn Cdn App Dev Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-node.md
# Get started with Azure CDN development ++ > [!div class="op_single_selector"] > - [Node.js](cdn-app-dev-node.md) > - [.NET](cdn-app-dev-net.md)
cdn Cdn Azure Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-azure-diagnostic-logs.md
# Diagnostic logs - Azure Content Delivery Network + With Azure diagnostic logs, you can view core analytics and save them into one or more destinations including: - Azure Storage account
cdn Cdn Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-billing.md
# Understanding Azure Content Delivery Network billing + This FAQ describes the billing structure for content hosted by Azure Content Delivery Network. ## What is a billing region?
cdn Cdn Caching Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-policy.md
# Manage Azure CDN caching policy in Azure Media Services + Azure Media Services provides HTTP-based Adaptive Streaming and progressive download. HTTP-based streaming is highly scalable with the benefits of caching in proxy and CDN layers and client-side caching. Streaming endpoints provide general streaming capabilities and configuration for HTTP cache headers. Streaming endpoints set the HTTP Cache-Control: max-age and Expires headers. You can get more information about HTTP cache headers from [W3.org](https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html). ## Default Caching headers
cdn Cdn Caching Rules Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules-tutorial.md
# Tutorial: Set Azure Content Delivery Network caching rules + > [!NOTE] > Caching rules are available only for **Azure CDN Standard from Edgio** profiles. For **Azure CDN from Microsoft** profiles, you must use the [Standard rules engine](cdn-standard-rules-engine-reference.md) For **Azure CDN Premium from Edgio** profiles, you must use the [Edgio Premium rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
cdn Cdn Caching Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules.md
# Control Azure Content Delivery Network caching behavior with caching rules + This article describes how you can use content delivery network caching rules to set or modify default cache expiration behavior. These caching rules can either be global or with custom conditions, such as a URL path and file extension. > [!NOTE]
cdn Cdn Change Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-change-provider.md
# Migrate between content delivery network providers + Content Delivery Network services can provide resiliency and add benefits for different types of workloads. Switching between content delivery network providers is a common practice when your web delivery requirements change or when a different service is better suited for your business needs. The purpose of this article is to share best practices when migrating from one content delivery network service to another. In this article, we talk about the different Azure Content Delivery Network services, how to compare these products, and best practices to consider when performing the migration.
cdn Cdn Cors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-cors.md
# Using Azure CDN with CORS + ## What is CORS? CORS (cross-origin resource sharing) is an HTTP feature that enables a web application running under one domain to access resources in another domain. In order to reduce the possibility of cross-site scripting attacks, all modern web browsers implement a security restriction known as [same-origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy). This restriction prevents a web page from calling APIs in a different domain. CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin.
cdn Cdn Create A Storage Account With Cdn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-a-storage-account-with-cdn.md
# Quickstart: Integrate an Azure Storage account with Azure Content Delivery Network + In this quickstart, you enable [Azure Content Delivery Network](cdn-overview.md) to cache content from Azure Storage. Azure Content Delivery Network offers developers a global solution for delivering high-bandwidth content. It can cache blobs and static content of compute instances at physical nodes in the United States, Europe, Asia, Australia, and South America. > [!NOTE]
cdn Cdn Create Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-endpoint-how-to.md
# Create an Azure Content Delivery Network endpoint + This article describes all the settings for creating an [Azure Content Delivery Network](cdn-overview.md) endpoint in an existing content delivery network profile. After you've created a profile and an endpoint, you can start delivering content to your customers. For a quickstart on creating a profile and endpoint, see [Quickstart: Create an Azure Content Delivery Network profile and endpoint](cdn-create-new-endpoint.md). ## Prerequisites
cdn Cdn Create New Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-new-endpoint.md
# Quickstart: Create an Azure Content Delivery Network profile and endpoint + In this quickstart, you enable Azure Content Delivery Network by creating a new content delivery network profile, which is a collection of one or more content delivery network endpoints. After you've created a profile and an endpoint, you can start delivering content to your customers. ## Prerequisites
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
# Tutorial: Configure HTTPS on an Azure CDN custom domain + This tutorial shows how to enable the HTTPS protocol for a custom domain associated with an Azure CDN endpoint. The HTTPS protocol on your custom domain (for example, `https://www.contoso.com`), ensures your sensitive data is delivered securely via TLS/SSL. When your web browser is connected via HTTPS, the browser validates the web site's certificate. The browser verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
cdn Cdn Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-ddos.md
# Azure Content Delivery Network DDoS Protection + A content delivery network provides DDoS Protection by design. In addition to the global capacity to absorb volumetric attacks, Azure Content Delivery Network has extra DDoS Protection as outlined in this article, for no extra cost. <a name='azure-cdn-from-microsoft'></a>
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
# What are the comparisons between Azure Content Delivery Network product features? + Azure Content Delivery Network includes three products: - **Azure CDN Standard from Microsoft**
The following table compares the features available with each product.
| [Real-time alerts](cdn-real-time-alerts.md) | | |**&#x2713;** | |||| | **Ease of use** | **Standard Microsoft** | **Standard Edgio** | **Premium Edgio** |
-| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](/azure/media-services/previous/media-services-portal-manage-streaming-endpoints) | **&#x2713;** |**&#x2713;** |**&#x2713;** |
+| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and Media Services | **&#x2713;** |**&#x2713;** |**&#x2713;** |
| Management via [REST API](/rest/api/cdn/), [.NET](cdn-app-dev-net.md), [Node.js](cdn-app-dev-node.md), or [PowerShell](cdn-manage-powershell.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** | | [Compression MIME types](./cdn-improve-performance.md) |Configurable |Configurable |Configurable | | Compression encodings |gzip, brotli |gzip, deflate, bzip2, brotli |gzip, deflate, bzip2, brotli |
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-how-caching-works.md
# How caching works + This article provides an overview of general caching concepts and how [Azure Content Delivery Network](cdn-overview.md) uses caching to improve performance. If you'd like to learn about how to customize caching behavior on your content delivery network endpoint, see [Control Azure Content Delivery Network caching behavior with caching rules](cdn-caching-rules.md) and [Control Azure Content Delivery Network caching behavior with query strings](cdn-query-string.md). ## Introduction to caching
cdn Cdn Http Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http-variables.md
# HTTP variables for Azure CDN rules engine + HTTP variables provide the means through which you can retrieve HTTP request and response metadata. This metadata can then be used to dynamically alter a request or a response. The use of HTTP variables is restricted to the following rules engine features: - [Cache-Key Rewrite](https://docs.vdms.com/cdn/Content/HRE/F/Cache-Key-Rewrite.htm)
cdn Cdn Http2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http2.md
# HTTP/2 Support in Azure Content Delivery Network + HTTP/2 is a major revision to HTTP/1.1\. This technology delivers enhanced web performance, diminished response time, and an elevated user experience, all the while preserving the customary HTTP methods, status codes, and semantics. Though HTTP/2 is designed to work with HTTP and HTTPS, many client web browsers only support HTTP/2 over TLS (Transport Layer Security). ### HTTP/2 Benefits
cdn Cdn Improve Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-improve-performance.md
# Improve performance by compressing files in Azure CDN + File compression is a simple and effective method to improve file transfer speed and increase page-load performance by reducing a file's size before it's sent from the server. File compression can reduce bandwidth costs and provide a more responsive experience for your users. There are two ways to enable file compression:
cdn Cdn Large File Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-large-file-optimization.md
# Large file download optimization with Azure Content Delivery Network + File sizes of content delivered over the internet continue to grow due to enhanced functionality, improved graphics, and rich media content. This growth gets driven by many factors: broadband penetration, larger inexpensive storage devices, widespread increase of high definition video, and internet-connected devices (IoT). A fast and efficient delivery mechanism for large files is critical to ensure a smooth and enjoyable consumer experience. Delivery of large files has several challenges. First, the average time to download a large file can be significant because applications might not download all data sequentially. In some cases, applications might download the last part of a file before the first part. When only a small amount of a file is requested or a user pauses a download, the download can fail. The download also might be delayed until after the content delivery network retrieves the entire file from the origin server.
cdn Cdn Log Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-log-analysis.md
# Analyze Azure CDN usage patterns + After you enable CDN for your application, you can monitor CDN usage, check the health of your delivery, and troubleshoot potential issues. Azure CDN provides these capabilities in the following ways: ## Raw logs for Azure CDN from Microsoft
cdn Cdn Manage Expiration Of Blob Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-blob-content.md
# Manage expiration of Azure Blob storage in Azure Content Delivery Network + > [!div class="op_single_selector"] > - [Azure web content](cdn-manage-expiration-of-cloud-service-content.md) > - [Azure Blob storage](cdn-manage-expiration-of-blob-content.md)
cdn Cdn Manage Expiration Of Cloud Service Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-cloud-service-content.md
# Manage expiration of web content in Azure Content Delivery Network + > [!div class="op_single_selector"] > - [Azure web content](cdn-manage-expiration-of-cloud-service-content.md) > - [Azure Blob storage](cdn-manage-expiration-of-blob-content.md)
cdn Cdn Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-powershell.md
# Manage Azure Content Delivery Network with PowerShell + PowerShell provides one of the most flexible methods to manage your Azure Content Delivery Network profiles and endpoints. You can use PowerShell interactively or by writing scripts to automate management tasks. This tutorial demonstrates several of the most common tasks you can accomplish with PowerShell to manage your Azure Content Delivery Network profiles and endpoints. ## Prerequisites
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-map-content-to-custom-domain.md
# Tutorial: Add a custom domain to your endpoint + This tutorial shows how to add a custom domain to an Azure Content Delivery Network endpoint. The endpoint name in your content delivery network profile is a subdomain of azureedge.net. By default when delivering content, the content delivery network profile domain gets included in the URL.
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-msft-http-debug-headers.md
# Debug HTTP header for Azure CDN from Microsoft + The debug response header, `X-Cache`, provides details as to what layer of the CDN stack the content was served from. This header is specific to Azure CDN from Microsoft. ### Response header format
cdn Cdn Optimization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-optimization-overview.md
# Optimize Azure Content Delivery Network for the type of content delivery + When you deliver content to a large global audience, it's critical to ensure the optimized delivery of your content. [Azure Content Delivery Network](cdn-overview.md) can optimize the delivery experience based on the type of content you have. The content can be a website, a live stream, a video, or a large file for download. When you create a content delivery network endpoint, you specify a scenario in the **Optimized for** option. Your choice determines which optimization is applied to the content delivered from the content delivery network endpoint. Optimization choices are designed to use best-practice behaviors to improve content delivery performance and better origin offload. Your scenario choices affect performance by modifying configurations for partial caching, object chunking, and the origin failure retry policy.
cdn Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-overview.md
# What is a content delivery network on Azure? + A content delivery network is a distributed network of servers that can efficiently deliver web content to users. A content delivery network stores cached content on edge servers in point of presence (POP) locations that are close to end users, to minimize latency. Azure Content Delivery Network offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Azure Content Delivery Network can also accelerate dynamic content, which can't be cached, by using various network optimizations using content delivery network POPs. For example, route optimization to bypass Border Gateway Protocol (BGP).
cdn Cdn Pop List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-list-api.md
# Retrieve the current POP IP list for Azure Content Delivery Network + <a name='retrieve-the-current-verizon-pop-ip-list-for-azure-cdn'></a> <a name='retrieve-the-current-edgio-pop-ip-list-for-azure-cdn'></a>
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
# Azure Content Delivery Network Coverage by Metro + > [!div class="op_single_selector"] > - [POP locations by region](cdn-pop-locations.md) > - [Edgio POP locations by abbreviation](cdn-pop-abbreviations.md)
cdn Cdn Purge Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-purge-endpoint.md
# Purge an Azure Content Delivery Network endpoint + Azure Content Delivery Network edge nodes cache content until the content's time to live (TTL) expires. After the TTL expires, when a client makes a request for the content from the edge node, the edge node will retrieve a new updated copy of the content to serve to the client. The refreshed content is then stored in the cache of the edge node. The best practice to make sure your users always obtain the latest copy of your assets is to version your assets for each update and publish them as new URLs. Content delivery network will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve new updated assets. The reason might be due to updates to your web application, or to quickly update assets that contain incorrect information.
cdn Cdn Query String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-query-string.md
# Control Azure Content Delivery Network caching behavior with query strings - standard tier + > [!div class="op_single_selector"] > - [Standard tier](cdn-query-string.md) > - [Premium tier](cdn-query-string-premium.md)
cdn Cdn Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-resource-health.md
# Monitor the health of Azure Content Delivery Network resources + Azure Content Delivery Network Resource health is a subset of [Azure resource health](/azure/service-health/resource-health-overview). You can use Azure resource health to monitor the health of Content Delivery Network resources and receive actionable guidance to troubleshoot problems. >[!IMPORTANT]
cdn Cdn Restrict Access By Country Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-restrict-access-by-country-region.md
# Restrict Azure CDN content by country/region + When a user requests your content, the content is served to users in all locations. You might want to restrict access to your content by country/region. With the *geo-filtering* feature, you can create rules on specific paths on your CDN endpoint. You can set the rules to allow or block content in selected countries/regions.
cdn Cdn Sas Storage Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-sas-storage-support.md
# Using Azure Content Delivery Network with SAS + When you set up a storage account for Azure Content Delivery Network to use to cache content, by default anyone who knows the URLs for your storage containers can access the files that you've uploaded. To protect the files in your storage account, you can set the access of your storage containers from public to private. However, if you do so, no one is able to access your files. If you want to grant limited access to private storage containers, you can use the Shared Access Signature (SAS) feature of your Azure Storage account. A SAS is a URI that grants restricted access rights to your Azure Storage resources without exposing your account key. You can provide a SAS to clients that you don't trust with your storage account key but to whom you want to delegate access to certain storage account resources. By distributing a Shared Access Signature URI to these clients, you grant them access to a resource for a specified period of time.
cdn Cdn Standard Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine-actions.md
# Actions in the Standard rules engine for Azure Content Delivery Network + In the [Standard rules engine](cdn-standard-rules-engine.md) for Azure Content Delivery Network, a rule consists of one or more match conditions and an action. This article provides detailed descriptions of the actions you can use in the Standard rules engine for Azure Content Delivery Network. The second part of a rule is an action. An action defines the behavior that's applied to the request type that a match condition or set of match conditions identifies.
cdn Cdn Standard Rules Engine Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine-match-conditions.md
# Match conditions in the Standard rules engine for Azure Content Delivery Network + In the [Standard rules engine](cdn-standard-rules-engine.md) for Azure Content Delivery Network, a rule consists of one or more match conditions and an action. This article provides detailed descriptions of the match conditions you can use in the Standard rules engine for Azure Content Delivery Network. The first part of a rule is a match condition or set of match conditions. In the Standard rules engine for Azure Content Delivery Network, each rule can have up to four match conditions. A match condition identifies specific types of requests for which defined actions are performed. If you use multiple match conditions, the match conditions are grouped together by using AND logic.
cdn Cdn Standard Rules Engine Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine-reference.md
# Standard rules engine reference for Azure Content Delivery Network + In the [Standard rules engine](cdn-standard-rules-engine.md) for Azure Content Delivery Network, a rule consists of one or more match conditions and an action. This article provides detailed descriptions of the match conditions and features that are available in the Standard rules engine for Azure Content Delivery Network. The rules engine is designed to be the final authority on how specific types of requests get processed by Standard Azure Content Delivery Network.
cdn Cdn Standard Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine.md
# Set up the Standard rules engine for Azure Content Delivery Network + This article describes how to set up and use the Standard rules engine for Azure Content Delivery Network. ## Standard rules engine
cdn Cdn Storage Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-storage-custom-domain-https.md
# Tutorial: Access storage blobs using an Azure Content Delivery Network custom domain over HTTPS + After you've integrated your Azure Storage account with Azure Content Delivery Network, you can add a custom domain and enable HTTPS on that domain for your custom blob storage endpoint. > [!NOTE]
cdn Cdn Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-traffic-manager.md
# Failover across multiple endpoints with Azure Traffic Manager + When you configure Azure Content Delivery Network, you can select the optimal provider and pricing tier for your needs. Azure Content Delivery Network, with its globally distributed infrastructure, by default creates local and geographic redundancy and global load balancing to improve service availability and performance.
cdn Cdn Troubleshoot Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-troubleshoot-compression.md
# Troubleshooting Azure Content Delivery Network file compression + This article helps you troubleshoot issues with [CDN file compression](cdn-improve-performance.md). If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and the Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can also file an Azure Support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
cdn Cdn Troubleshoot Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-troubleshoot-endpoint.md
# Troubleshooting Azure Content Delivery Network endpoints that return a 404 status code + This article enables you to troubleshoot issues with Azure Content Delivery Network endpoints that return 404 HTTP response status codes. If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and the Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can also file an Azure Support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
# Quickstart: Create an Azure Content Delivery Network profile and endpoint - Bicep + Get started with Azure Content Delivery Network by using a Bicep file. The Bicep file deploys a profile and an endpoint. [!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)]
cdn Create Profile Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-template.md
# Quickstart: Create an Azure Content Delivery Network profile and endpoint - ARM template + Get started with Azure Content Delivery Network by using an Azure Resource Manager template (ARM template). The template deploys a profile and an endpoint. [!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
cdn Create Profile Endpoint Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-terraform.md
ai-usage: ai-assisted
# Quickstart: Create an Azure CDN profile and endpoint using Terraform + This article shows how to use Terraform to create an [Azure CDN profile and endpoint](/azure/cdn/cdn-overview) using [Terraform](/azure/developer/terraform/quickstart-configure). [!INCLUDE [Terraform abstract](~/azure-dev-docs-pr/articles/terraform/includes/abstract.md)]
cdn Endpoint Multiorigin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/endpoint-multiorigin.md
# Azure CDN endpoint multi-origin + Multi-origin support eliminates downtime and establishes global redundancy. When you choose multiple origins within an Azure CDN endpoint, the redundancy provided spreads the risk by probing the health of each origin and failing over if necessary.
cdn Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/managed-identity.md
# Use managed identities for Azure Content Delivery Network to access Azure Key Vault certificates + A managed identity generated by Microsoft Entra ID allows your Azure Content Delivery Network instance to easily and securely access other Microsoft Entra protected resources, such as Azure Key Vault. Azure manages the identity resource, so you don't have to create or rotate any secrets. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). Once you enable managed identity for Azure Front Door and grant proper permissions to access your Azure key vault, Azure Front Door only uses managed identity to access the certificates. If you don't **add the managed identity permission to your Key Vault**, custom certificate autorotation and adding new certificates fails without permissions to Key Vault. If you disable managed identity, Azure Front Door falls back to using the original configured Microsoft Entra App. This solution isn't recommended and will be retired in the future.
cdn Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/migrate-tier.md
# Migrate Azure CDN from Microsoft (classic) to Standard/Premium tier + Azure Front Door Standard and Premium tier bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users using the Microsoft global network. This article guides you through the migration process to move your Azure CDN from Microsoft (classic) profile to either a Standard or Premium tier profile. ## Prerequisites
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
# Real-time Monitoring, metrics, and access Logs for Azure CDN + With Azure CDN from Microsoft, you can monitor resources in the following ways to help you troubleshoot, track, and debug issues. - Raw logs provide rich information about every request that CDN receives. Raw logs differ from activity logs. Activity logs provide visibility into the operations done on Azure resources.
cdn Onboard Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/onboard-apex-domain.md
# Onboard a root or apex domain to an existing Azure CDN endpoint + Azure CDN uses CNAME records to validate domain ownership for onboarding of custom domains. CDN doesn't expose the frontend IP address associated with your CDN profile. You can't map your apex domain to an IP address if your intent is to onboard it to Azure CDN. The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`; you can create CNAME records for `somelabel.contoso.com`; but you can't create a CNAME for `contoso.com` itself. This restriction presents a problem for application owners who have load-balanced applications behind Azure CDN. Since using a CDN profile requires creation of a CNAME record, it isn't possible to point at the CDN profile from the zone apex.
cdn Cdn Azure Cli Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/scripts/cli/cdn-azure-cli-create-endpoint.md
ms.tool: azure-cli
# Create an Azure Content Delivery Network profile and endpoint using the Azure CLI + As an alternative to the Azure portal, you can use these sample Azure CLI scripts to manage the following content delivery network operations: - Create a content delivery network profile.
cdn Subscription Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/subscription-offerings.md
# Azure CDN subscription offers and bandwidth throttling + Some Azure CDN offerings are only available to certain subscription types. Bandwidth throttling might also apply depending on your subscription type. ## Free and Trial Subscription
cdn Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/tier-migration.md
# About Azure CDN from Microsoft (classic) to Azure Front Door migration + Azure Front Door Standard and Premium tiers were released in March 2022 as the next generation content delivery network service. The newer tiers combine the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall (WAF). With features such as Private Link integration, an enhanced rules engine, and advanced diagnostics, you can secure and accelerate your web applications to bring a better experience to your customers. We recommend migrating your classic profile to one of the newer tiers to benefit from the new features and improvements. To ease the move to the new tiers, Azure Front Door provides a zero-downtime migration to move your workload from Azure Front Door (classic) to either Standard or Premium.
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
For customers that use Virtual appointments, refer to our Teams Interoperability
- For chat threads with more than 20 participants, read receipts and typing indicator features aren't supported. - For Teams Interop scenarios, it's the number of Azure Communication Services users, not Teams users, that must be below 20 for the typing indicator feature to be supported. - When creating a chat thread, you can set the retention policy between 30 and 90 days.-- For Teams Interop scenarios, the typing indicator event might contain a blank display name when sent from Teams user.-- For Teams Interop scenarios, read receipts aren't supported for Teams users.
+- Moreover, in Teams Interop scenarios, there are the following limitations:
+ - The Teams user's display name in the typing indicator event is blank.
+ - Read receipts aren't supported.
+ - Certain identities aren't supported (for example, [Bot users](/microsoftteams/platform/bots/what-are-bots), [Skype users](https://support.microsoft.com/en-us/office/use-skype-in-microsoft-teams-4382ea15-f963-413d-8982-491c1b9ae3bf), and [non-enterprise users](https://support.microsoft.com/en-us/office/learn-more-about-subscriptions-for-microsoft-teams-free-1061bbd0-6d97-46a6-8ca0-21059be3eee3)).
## Chat architecture
communication-services Button Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/button-injection.md
zone_pivot_groups: acs-plat-ios-android
# Customize the button bar - To implement custom actions or modify the current button layout, you can interact with the Native UI Library's API. This API involves defining custom button configurations, specifying actions, and managing the button bar's current actions. The API provides methods for adding custom actions, and removing existing buttons, all of which are accessible via straightforward function calls. This functionality provides a high degree of customization, and ensures that the user interface remains cohesive and consistent with the application's overall design.
communication-services Setup Title Subtitle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/setup-title-subtitle.md
zone_pivot_groups: acs-plat-ios-android
# Customize the title and subtitle - Developers now have the capability to customize the title and subtitle of a call, both during setup and while the call is in progress. This feature allows for greater flexibility in aligning the call experience with specific use cases. For instance, in a customer support scenario, the title could display the issue being addressed, while the subtitle could show the customer's name or ticket number.
container-apps Azure Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md
Previously updated : 09/23/2024 Last updated : 09/26/2024
Arm64 based clusters aren't supported at this time.
- Update EasyAuth to support MISE
+ ### Container Apps extension v1.37.2 (September 2024)
+
+ - Updated the Dapr-Metrics image to v0.6.8 to resolve a network timeout issue
+ - Resolved an issue in the Log Processor that prevented the MDSD container from starting when the cluster is connected behind a proxy
+ ## Next steps [Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md)
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
- Title: 'Tutorial: Deploy a background processing application with Azure Container Apps'
-description: Learn to create an application that continuously runs in the background with Azure Container Apps
---- Previously updated : 01/10/2024----
-# Tutorial: Deploy a background processing application with Azure Container Apps
-
-Using Azure Container Apps allows you to deploy applications without requiring the exposure of public endpoints. By using Container Apps scale rules, the application can scale out and in based on the Azure Storage queue length. When there are no messages on the queue, the container app scales in to zero.
-
-You learn how to:
-
-> [!div class="checklist"]
-> * Create a Container Apps environment to deploy your container apps
-> * Create an Azure Storage Queue to send messages to the container app
-> * Deploy your background processing application as a container app
-> * Verify that the queue messages are processed by the container app
-----
-## Set up a storage queue
-
-Begin by defining a name for the storage account. Storage account names must be *unique within Azure* and be from 3 to 24 characters in length containing numbers and lowercase letters only.
-
-# [Bash](#tab/bash)
-
-```bash
-STORAGE_ACCOUNT_NAME="<STORAGE_ACCOUNT_NAME>"
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$StorageAcctName = "<StorageAccountName>"
-```
---
-Create an Azure Storage account.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az storage account create \
- --name $STORAGE_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP \
- --location "$LOCATION" \
- --sku Standard_RAGRS \
- --kind StorageV2
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$StorageAcctArgs = @{
- Name = $StorageAcctName
- ResourceGroupName = $ResourceGroupName
- Location = $location
- SkuName = 'Standard_RAGRS'
- Kind = 'StorageV2'
-}
-$StorageAcct = New-AzStorageAccount @StorageAcctArgs
-```
---
-Next, get the connection string for the queue.
-
-# [Bash](#tab/bash)
-
-```azurecli
-QUEUE_CONNECTION_STRING=`az storage account show-connection-string -g $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME --query connectionString --out json | tr -d '"'`
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
- $QueueConnectionString = (Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAcctName).Context.ConnectionString
-```
---
-Now you can create the message queue.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az storage queue create \
- --name "myqueue" \
- --account-name $STORAGE_ACCOUNT_NAME \
- --connection-string $QUEUE_CONNECTION_STRING
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$Queue = New-AzStorageQueue -Name 'myqueue' -Context $StorageAcct.Context
-```
---
-Finally, you can send a message to the queue.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az storage message put \
- --content "Hello Queue Reader App" \
- --queue-name "myqueue" \
- --connection-string $QUEUE_CONNECTION_STRING
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$QueueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("Hello Queue Reader App")
-$Queue.CloudQueue.AddMessageAsync($QueueMessage).GetAwaiter().GetResult()
-```
-
-A result of `Microsoft.Azure.Storage.Core.NullType` is returned when the message is added to the queue.
---
-## Deploy the background application
-
-Create a file named *queue.json* and paste the following configuration code into the file.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "defaultValue": "canadacentral",
- "type": "String"
- },
- "environment_name": {
- "type": "String"
- },
- "queueconnection": {
- "type": "secureString"
- }
- },
- "variables": {},
- "resources": [
- {
- "name": "queuereader",
- "type": "Microsoft.App/containerApps",
- "apiVersion": "2022-03-01",
- "kind": "containerapp",
- "location": "[parameters('location')]",
- "properties": {
- "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments', parameters('environment_name'))]",
- "configuration": {
- "activeRevisionsMode": "single",
- "secrets": [
- {
- "name": "queueconnection",
- "value": "[parameters('queueconnection')]"
- }]
- },
- "template": {
- "containers": [
- {
- "image": "mcr.microsoft.com/azuredocs/containerapps-queuereader",
- "name": "queuereader",
- "env": [
- {
- "name": "QueueName",
- "value": "myqueue"
- },
- {
- "name": "QueueConnectionString",
- "secretRef": "queueconnection"
- }
- ]
- }
- ],
- "scale": {
- "minReplicas": 1,
- "maxReplicas": 10,
- "rules": [
- {
- "name": "myqueuerule",
- "azureQueue": {
- "queueName": "myqueue",
- "queueLength": 100,
- "auth": [
- {
- "secretRef": "queueconnection",
- "triggerParameter": "connection"
- }
- ]
- }
- }
- ]
- }
- }
- }
- }]
-}
-
-```
-
-Now you can create and deploy your container app.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az deployment group create --resource-group "$RESOURCE_GROUP" \
- --template-file ./queue.json \
- --parameters \
- environment_name="$CONTAINERAPPS_ENVIRONMENT" \
- queueconnection="$QUEUE_CONNECTION_STRING" \
- location="$LOCATION"
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$Params = @{
- environment_name = $ContainerAppsEnvironment
- location = $Location
- queueconnection = $QueueConnectionString
-}
-
-$DeploymentArgs = @{
- ResourceGroupName = $ResourceGroupName
- TemplateParameterObject = $Params
- TemplateFile = './queue.json'
- SkipTemplateParameterPrompt = $true
-}
-New-AzResourceGroupDeployment @DeploymentArgs
-```
---
-This command deploys the demo application from the public container image called `mcr.microsoft.com/azuredocs/containerapps-queuereader` and sets secrets and environments variables used by the application.
-
-The application scales out to 10 replicas based on the queue length as defined in the `scale` section of the ARM template.
-
-## Verify the result
-
-The container app runs as a background process. As messages arrive from the Azure Storage Queue, the application creates log entries in Log analytics. You must wait a few minutes for the analytics to arrive for the first time before you're able to query the logged data.
-
-Run the following command to see logged messages. This command requires the Log analytics extension, so accept the prompt to install extension when requested.
-
-# [Bash](#tab/bash)
-
-```azurecli
-LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv`
-
-az monitor log-analytics query \
- --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s | take 5" \
- --out table
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s | take 5"
-$queryResults.Results
-```
---
-> [!TIP]
-> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
-
-## Clean up resources
-
-Once you're done, run the following command to delete the resource group that contains your Container Apps resources.
-
->[!CAUTION]
-> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az group delete \
- --resource-group $RESOURCE_GROUP
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Remove-AzResourceGroup -Name $ResourceGroupName -Force
-```
--
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
The following example creates a subscription named *Dev Team subscription* for t
### [REST](#tab/rest)
+Replace the placeholder value `sampleAlias` as needed. For more information on these REST calls, see [Create](/rest/api/subscription/alias/create) and [Get](/rest/api/subscription/alias/get).
+
```json
PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```
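A minimal sketch of issuing the same calls from the Azure CLI with `az rest`; the alias name matches the placeholder above, the display name comes from the example subscription, and the billing scope segments are placeholders for your own billing account, billing profile, and invoice section:

```azurecli
# Create (or update) the subscription alias. Billing scope IDs are placeholders.
az rest --method put \
    --url "https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01" \
    --body '{
        "properties": {
            "displayName": "Dev Team subscription",
            "billingScope": "/providers/Microsoft.Billing/billingAccounts/<billingAccountName>/billingProfiles/<billingProfileName>/invoiceSections/<invoiceSectionName>",
            "workload": "Production"
        }
    }'

# Check the provisioning state of the alias.
az rest --method get \
    --url "https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01"
```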
resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01
* Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md). * For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md). * To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-management-groups-and-subscriptions).
-* For advanced subscription creation scenarios using REST API, see [Alias - Create](/rest/api/subscription/2021-10-01/alias/create).
+* For advanced subscription creation scenarios using REST API, see [Alias - Create](/rest/api/subscription/alias/).
data-factory Concepts Workflow Orchestration Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-workflow-orchestration-manager.md
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-> [!NOTE]
-> Workflow Orchestration Manager is powered by Apache Airflow.
- > [!NOTE] > Apache Airflow is now accessible through Microsoft Fabric. Microsoft Fabric offers a wide range of Apache Airflow capabilities via Data Workflows. > We recommend migrating your existing Workflow Orchestration Manager (Apache Airflow in ADF) based workflows to Data Workflows (Apache Airflow in Microsoft Fabric) for a broader set of features. Apache Airflow capabilities will be Generally Available in Q1 CY2025 only in Microsoft Fabric.
-> For new Apache Airflow projects, we strongly recommend using Apache Airflow in Microsoft Fabric. More details can be found [here](https://blog.fabric.microsoft.com/blog/introducing-data-workflows-in-microsoft-fabric?ft=All).
+> For new Apache Airflow projects, we recommend using Apache Airflow in Microsoft Fabric. More details can be found [here](https://blog.fabric.microsoft.com/blog/introducing-data-workflows-in-microsoft-fabric?ft=All).
+> New users can't create a new Workflow Orchestration Manager instance in ADF. Existing users who have a Workflow Orchestration Manager instance can continue to use it, but should plan to migrate soon.
> [!NOTE] > Workflow Orchestration Manager for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
data-factory Connector Deprecation Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md
Previously updated : 09/24/2024 Last updated : 09/27/2024 # Planned connector deprecations for Azure Data Factory
This article describes future deprecations for some connectors of Azure Data Fac
## Overview
-| Connector|Release stage |End of Support Date |Disabled Date |
-|:-- |:-- |:-- | :-- |
-| [Google BigQuery (legacy)](connector-google-bigquery-legacy.md)  | End of support announced and new version available | October 31, 2024 | January 10, 2025 |
-| [MariaDB (legacy driver version)](connector-mariadb.md)  | End of support announced and new version available | October 31, 2024 | January 10, 2025 |
-| [MySQL (legacy driver version)](connector-mysql.md)  | End of support announced and new version available | October 31, 2024| January 10, 2025|
-| [Salesforce (legacy)](connector-salesforce-legacy.md)   | End of support announced and new version available | October 11, 2024 | January 10, 2025|
-| [Salesforce Service Cloud (legacy)](connector-salesforce-service-cloud-legacy.md)   | End of support announced and new version available | October 11, 2024 |January 10, 2025 |
-| [PostgreSQL (legacy)](connector-postgresql-legacy.md)   | End of support announced and new version available |October 31, 2024 | January 10, 2025 |
-| [Snowflake (legacy)](connector-snowflake-legacy.md)   | End of support announced and new version available | October 31, 2024 | January 10, 2025 |
-| [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) | End of support announced |December 31, 2024 | December 31, 2024 |
-| [Concur (Preview)](connector-concur.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Couchbase (Preview)](connector-couchbase.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Drill](connector-drill.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Hbase](connector-hbase.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Magento (Preview)](connector-magento.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Marketo (Preview)](connector-marketo.md) | End of support announced | December 31, 2024| December 31, 2024 |
-| [Oracle Eloqua (Preview)](connector-oracle-eloqua.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Oracle Responsys (Preview)](connector-oracle-responsys.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Oracle Service Cloud (Preview)](connector-oracle-service-cloud.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Paypal (Preview)](connector-paypal.md) | End of support announced |December 31, 2024 | December 31, 2024|
-| [Phoenix](connector-phoenix.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Salesforce Marketing Cloud](connector-salesforce-marketing-cloud.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Zoho (Preview)](connector-zoho.md) | End of support announced | December 31, 2024 | December 31, 2024 |
-| [Amazon Marketplace Web Service](connector-amazon-marketplace-web-service.md)| Disabled |/ |/ |
+| Connector|Upgrade Guidance|Release stage |End of Support Date |Disabled Date |
+|:-- |:-- |:-- |:-- | :-- |
+| [Google BigQuery (legacy)](connector-google-bigquery-legacy.md)  | [Link](connector-google-bigquery.md#upgrade-the-google-bigquery-linked-service) |End of support announced and new version available | October 31, 2024 | January 10, 2025 |
+| [MariaDB (legacy driver version)](connector-mariadb.md)  | [Link](connector-mariadb.md#upgrade-the-mariadb-driver-version) | End of support announced and new version available | October 31, 2024 | January 10, 2025 |
+| [MySQL (legacy driver version)](connector-mysql.md)  | [Link](connector-mysql.md#upgrade-the-mysql-driver-version) | End of support announced and new version available | October 31, 2024| January 10, 2025|
+| [Salesforce (legacy)](connector-salesforce-legacy.md)   | [Link](connector-salesforce.md#upgrade-the-salesforce-linked-service) | End of support announced and new version available | October 11, 2024 | January 10, 2025|
+| [Salesforce Service Cloud (legacy)](connector-salesforce-service-cloud-legacy.md)   | [Link](connector-salesforce-service-cloud.md#upgrade-the-salesforce-service-cloud-linked-service) | End of support announced and new version available | October 11, 2024 |January 10, 2025 |
+| [PostgreSQL (legacy)](connector-postgresql-legacy.md)   | [Link](connector-postgresql.md#upgrade-the-postgresql-linked-service)| End of support announced and new version available |October 31, 2024 | January 10, 2025 |
+| [Snowflake (legacy)](connector-snowflake-legacy.md)   | [Link](connector-snowflake.md#upgrade-the-snowflake-linked-service) | End of support announced and new version available | October 31, 2024 | January 10, 2025 |
+| [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) |/ | End of support announced |December 31, 2024 | December 31, 2024 |
+| [Concur (Preview)](connector-concur.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Couchbase (Preview)](connector-couchbase.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Drill](connector-drill.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Hbase](connector-hbase.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Magento (Preview)](connector-magento.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Marketo (Preview)](connector-marketo.md) |/ | End of support announced | December 31, 2024| December 31, 2024 |
+| [Oracle Eloqua (Preview)](connector-oracle-eloqua.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Oracle Responsys (Preview)](connector-oracle-responsys.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Oracle Service Cloud (Preview)](connector-oracle-service-cloud.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Paypal (Preview)](connector-paypal.md) |/ | End of support announced |December 31, 2024 | December 31, 2024|
+| [Phoenix](connector-phoenix.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Salesforce Marketing Cloud](connector-salesforce-marketing-cloud.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Zoho (Preview)](connector-zoho.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 |
+| [Amazon Marketplace Web Service](connector-amazon-marketplace-web-service.md)|/ | Disabled |/ |/ |
## Release stages and support
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
The Snowflake connector offers new functionalities and is compatible with most f
| Support Basic and Key pair authentication. | Support Basic authentication. | | Script parameters are not supported in Script activity currently. As an alternative, utilize dynamic expressions for script parameters. For more information, see [Expressions and functions in Azure Data Factory and Azure Synapse Analytics](control-flow-expression-language-functions.md). | Support script parameters in Script activity. | | Support BigDecimal in Lookup activity. The NUMBER type, as defined in Snowflake, will be displayed as a string in Lookup activity. | BigDecimal is not supported in Lookup activity. |
+| The legacy ```connectionstring``` property is deprecated in favor of the required parameters **Account**, **Warehouse**, **Database**, **Schema**, and **Role**. | In the legacy Snowflake connector, the `connectionstring` property was used to establish a connection. |
To determine the version of the Snowflake connector used in your existing Snowflake linked service, check the ```type``` property. The legacy version is identified by ```"type": "Snowflake"```, while the latest V2 version is identified by ```"type": "SnowflakeV2"```.
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Last updated 09/03/2024
+ai-usage: ai-assisted
# Create and configure a self-hosted integration runtime
You can associate multiple nodes by installing the self-hosted integration runti
#### Scale out
-When processor usage is high and available memory is low on the self-hosted IR, add a new node to help scale out the load across machines. If activities fail because they time out or the self-hosted IR node is offline, it helps if you add a node to the gateway.
+When processor usage is high and available memory is low on the self-hosted IR, add a new node to help scale out the load across machines. If activities fail because they time out or the self-hosted IR node is offline, adding a node to the gateway also helps. To add a node, complete the following steps:
+
+1. [Download the SHIR setup from the Azure Data Factory portal](create-self-hosted-integration-runtime.md).
+2. Run the Installer on the node you want to add to the cluster.
+3. During the installation, select the option to join an existing integration runtime, and provide the authentication key from the existing SHIR to link the new node to the existing SHIR cluster. One way to retrieve the key is shown in the sketch after these steps.
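A minimal sketch of retrieving the authentication key from the command line; it assumes the `datafactory` Azure CLI extension is installed, the factory, resource group, and runtime names are placeholders, and the exact parameter names may vary slightly between extension versions:

```azurecli
# List the authentication keys for an existing self-hosted integration runtime.
# All names below are placeholders for your own resources.
az datafactory integration-runtime list-auth-key \
    --factory-name "<data-factory-name>" \
    --resource-group "<resource-group-name>" \
    --name "<self-hosted-ir-name>"
```

You can also copy the key from the self-hosted integration runtime's settings in Azure Data Factory Studio.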
#### Scale up
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-connector-format.md
You use Azure PostgreSQL as a source or sink in the data flow such as previewing
If you use the flexible server or Hyperscale (Citus) for your Azure PostgreSQL server, since the system is built via Spark upon an Azure Databricks cluster, there's a limitation in Azure Databricks that blocks our system from connecting to the flexible server or Hyperscale (Citus). You can review the following two links as references. - [Handshake fails trying to connect from Azure Databricks to Azure PostgreSQL with SSL](/answers/questions/170730/handshake-fails-trying-to-connect-from-azure-datab.html) -- [MCW-Real-time-data-with-Azure-Database-for-PostgreSQL-Hyperscale](https://github.com/microsoft/MCW-Real-time-data-with-Azure-Database-for-PostgreSQL-Hyperscale/blob/master/Hands-on%20lab/HOL%20step-by%20step%20-%20Real-time%20data%20with%20Azure%20Database%20for%20PostgreSQL%20Hyperscale.md)<br/>
+- MCW-Real-time-data-with-Azure-Database-for-PostgreSQL-Hyperscale<br/>
Refer to the content in the following picture in this article:<br/> :::image type="content" source="./media/data-flow-troubleshoot-connector-format/handshake-failure-cause-2.png" alt-text="Screenshot that shows the referring content in the article above.":::
data-factory Format Avro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-avro.md
The below table lists the properties supported by an avro sink. You can edit the
## Data type support ### Copy activity
-Avro [complex data types](https://avro.apache.org/docs/current/spec.html#schema_complex) are not supported (records, enums, arrays, maps, unions, and fixed) in Copy Activity.
+Avro complex data types are not supported (records, enums, arrays, maps, unions, and fixed) in Copy Activity.
### Data flows When working with Avro files in data flows, you can read and write complex data types, but be sure to clear the physical schema from the dataset first. In data flows, you can set your logical projection and derive columns that are complex structures, then auto-map those fields to an Avro file.
data-factory Supported File Formats And Compression Codecs Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/supported-file-formats-and-compression-codecs-legacy.md
To use Avro format in a Hive table, you can refer to [Apache Hive's tutorial](ht
Note the following points:
-* [Complex data types](https://avro.apache.org/docs/current/spec.html#schema_complex) are not supported (records, enums, arrays, maps, unions, and fixed).
+* Complex data types are not supported (records, enums, arrays, maps, unions, and fixed).
## <a name="compression-support"></a> Compression support (legacy)
databox-online Azure Stack Edge Pro 2 Safety Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-safety-languages.md
Previously updated : 12/05/2022 Last updated : 09/25/2024 # Safety instructions for your Azure Stack Edge Pro 2 in other languages
For safety instructions in English, go to [Safety instructions for your Azure St
## Safety instructions in Azure languages
-Use a locale code in the table below to create a URL for the article in a specific language.
+To create a URL for the article in a specific language, use a locale code in the following table.
Examples: - English-language article using the *en* locale code:
Examples:
|Language in English |Code | Download PDF | |--|--|--|
-| Amharic | am | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Amharic_RevA_5-25-2022.pdf) |
-| Azerbaijani | az | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Azerbaijani_RevA_5-25-2022.pdf) |
-| Bulgarian | bg | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Bulgarian_RevA_5-25-2022.pdf) |
-| Bengali | bn | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Bengali_RevA_5-25-2022.pdf) |
-| Bosnian | bs | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Bosnian_RevA_5-25-2022.pdf) |
-| Danish | da | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Danish_RevA_5-25-2022.pdf) |
-| Greek | el | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Greek_RevA_5-25-2022.pdf) |
-| Estonian | et | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Estonian_RevA_5-25-2022.pdf) |
-| Finnish | fi | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Finnish_RevA_5-25-2022.pdf) |
-| Hebrew | he | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Hebrew_RevA_5-25-2022.pdf) |
-| Hindi | hi | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Hindi_RevA_5-25-2022.pdf) |
-| Croatian | hr | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Croatian_RevA_5-25-2022.pdf) |
-| Hungarian | hu | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Hungarian_RevA_5-25-2022.pdf) |
-| Icelandic | is | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Icelandic_RevA_5-25-2022.pdf) |
-| Georgian | ka | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Georgian_RevA_5-25-2022.pdf) |
-| Lithuanian | lt | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Lithuanian_RevA_5-25-2022.pdf) |
-| Latvian | lv | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Latvian_RevA_5-25-2022.pdf) |
-| Macedonian | mk | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Macedonian_RevA_5-25-2022.pdf) |
-| Mongolian | mn | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Mongolian_RevA_5-25-2022.pdf) |
-| Malay | ms | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Malay_RevA_5-25-2022.pdf) |
-| Maltese | mt | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Maltese_RevA_5-25-2022.pdf) |
-| Norwegian Bokmål | nb | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Norwegian_RevA_5-25-2022.pdf) |
-| Nepali | ne | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Nepali_RevA_5-25-2022.pdf) |
-| Romanian | ro | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Romanian_RevA_5-25-2022.pdf) |
-| Slovak | sk | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Slovak_RevA_5-25-2022.pdf) |
-| Slovenian | sl | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Slovenian_RevA_5-25-2022.pdf) |
-| Montenegrin | sr | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Montenegrin_RevA_5-25-2022.pdf) |
-| Serbian | sr | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Serbian_RevA_5-25-2022.pdf) |
-| Kiswahili | sw | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Kiswahili_RevA_5-25-2022.pdf) |
-| Thai | th | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Thai_RevA_5-25-2022.pdf) |
-| Turkmen | tk | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Turkmen_RevA_5-25-2022.pdf) |
-| Ukrainian | uk | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Ukrainian_RevA_5-25-2022.pdf) |
-| Urdu | ur | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Urdu_RevA_5-25-2022.pdf) |
-| Uzbek | uz | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Uzbek_RevA_5-25-2022.pdf) |
-| Vietnamese | vi | [Download PDF](https://asedocs.blob.core.windows.net/safety-documentation/MicrosoftAzureStackEdgePro2_SafetyGuide_Vietnamese_RevA_5-25-2022.pdf) |
+| Amharic | am | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Amharic_RevA_5-25-2022.pdf) |
+| Azerbaijani | az | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Azerbaijani_RevA_5-25-2022.pdf) |
+| Bulgarian | bg | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Bulgarian_RevA_5-25-2022.pdf) |
+| Bengali | bn | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Bengali_RevA_5-25-2022.pdf) |
+| Bosnian | bs | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Bosnian_RevA_5-25-2022.pdf) |
+| Danish | da | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Danish_RevA_5-25-2022.pdf) |
+| Greek | el | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Greek_RevA_5-25-2022.pdf) |
+| Estonian | et | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Estonian_RevA_5-25-2022.pdf) |
+| Finnish | fi | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Finnish_RevA_5-25-2022.pdf) |
+| Hebrew | he | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Hebrew_RevA_5-25-2022.pdf) |
+| Hindi | hi | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Hindi_RevA_5-25-2022.pdf) |
+| Croatian | hr | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Croatian_RevA_5-25-2022.pdf) |
+| Hungarian | hu | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Hungarian_RevA_5-25-2022.pdf) |
+| Icelandic | is | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Icelandic_RevA_5-25-2022.pdf) |
+| Georgian | ka | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Georgian_RevA_5-25-2022.pdf) |
+| Lithuanian | lt | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Lithuanian_RevA_5-25-2022.pdf) |
+| Latvian | lv | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Latvian_RevA_5-25-2022.pdf) |
+| Macedonian | mk | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Macedonian_RevA_5-25-2022.pdf) |
+| Mongolian | mn | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Mongolian_RevA_5-25-2022.pdf) |
+| Malay | ms | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Malay_RevA_5-25-2022.pdf) |
+| Maltese | mt | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Maltese_RevA_5-25-2022.pdf) |
+| Norwegian Bokmål | nb | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Norwegian_RevA_5-25-2022.pdf) |
+| Nepali | ne | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Nepali_RevA_5-25-2022.pdf) |
+| Romanian | ro | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Romanian_RevA_5-25-2022.pdf) |
+| Slovak | sk | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Slovak_RevA_5-25-2022.pdf) |
+| Slovenian | sl | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Slovenian_RevA_5-25-2022.pdf) |
+| Montenegrin | sr | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Montenegrin_RevA_5-25-2022.pdf) |
+| Serbian | sr | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Serbian_RevA_5-25-2022.pdf) |
+| Kiswahili | sw | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Kiswahili_RevA_5-25-2022.pdf) |
+| Thai | th | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Thai_RevA_5-25-2022.pdf) |
+| Turkmen | tk | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Turkmen_RevA_5-25-2022.pdf) |
+| Ukrainian | uk | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Ukrainian_RevA_5-25-2022.pdf) |
+| Urdu | ur | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Urdu_RevA_5-25-2022.pdf) |
+| Uzbek | uz | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Uzbek_RevA_5-25-2022.pdf) |
+| Vietnamese | vi | [Download PDF](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/AzureStackEdgePro2Safety/MicrosoftAzureStackEdgePro2_SafetyGuide_Vietnamese_RevA_5-25-2022.pdf) |
## Next steps - Review the [Azure Stack Edge Pro GPU system requirements](azure-stack-edge-pro-2-system-requirements.md).-- [Prepare to deploy Azure Stack Edge Pro 2 device](azure-stack-edge-pro-2-deploy-prep.md)
+- [Prepare to deploy Azure Stack Edge Pro 2 device](azure-stack-edge-pro-2-deploy-prep.md).
energy-data-services How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md
In the same virtual network, you can create multiple endpoints. Each end point w
3. This is the IP to which your resource is connected.
+## New data partitions with static IP private endpoints
+It's preferable to create private endpoints with dynamic IP to enable dynamic data partition creation. If you initiate the creation of new data partitions with a static IP private endpoint, it fails. Each new data partition requires three additional static IPs, which a static IP private endpoint can't provide.
++
+To create new data partitions successfully with a static IP private endpoint, follow these steps. A CLI sketch for step 1 follows the screenshot.
+1. Create a new private endpoint with dynamic IP, or enable public access.
+2. Delete the existing static IP private endpoint from the Azure Data Manager for Energy instance, and also delete it from your Azure resources.
+3. Create the new data partitions.
+4. Delete the newly created dynamic IP private endpoint, or disable public access again.
+5. Create a new private endpoint with static IP. This step now asks you to assign the additional static IPs needed for the new data partitions.
+[![Screenshot that shows static IP with new data partition.](media/how-to-manage-private-links/private-links-19-static-ip.png)](media/how-to-manage-private-links/private-links-19-static-ip.png#lightbox)
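As a sketch of step 1, a dynamic IP private endpoint can typically be created with the Azure CLI. Every value below is a placeholder, and the `--group-id` must match the private link group ID shown for your Azure Data Manager for Energy resource in the portal.

```azurecli
az network private-endpoint create \
  --resource-group <resource-group> \
  --name <private-endpoint-name> \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id <azure-data-manager-for-energy-resource-id> \
  --group-id <group-id> \
  --connection-name <connection-name>
```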
## Next steps <!-- Add a context sentence for the following links -->
event-grid End Point Validation Cloud Events Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/end-point-validation-cloud-events-schema.md
Title: Endpoint validation with CloudEvents schema
description: This article describes WebHook event delivery and endpoint validation when using webhooks and CloudEvents v1.0 schema. Last updated 09/25/2024+ #customer intent: As a developer, I want to know how to validate a Webhook endpoint using the CloudEvents v1.0 schema.
event-grid End Point Validation Event Grid Events Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/end-point-validation-event-grid-events-schema.md
Title: Endpoint validation with Event Grid event schema
description: This article describes WebHook event delivery and endpoint validation when using webhooks and the Event Grid event schema. Last updated 09/25/2024+ #customer intent: As a developer, I want to know how to validate a Webhook endpoint using the Event Grid event schema.
event-grid Event Schema Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-compatibility.md
Title: Event schema compatibility
description: When a subscription is created, an outgoing event schema is defined. The following table shows you the compatibility allowed when creating a subscription. Last updated 09/25/2024+ #customer intent: As a developer, I want to know how to validate a Webhook endpoint using the CloudEvents v1.0 schema.
event-grid Namespaces Cloud Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespaces-cloud-events.md
Title: Event Grid Namespaces - support for CloudEvents schema description: Describes how Event Grid Namespaces support CloudEvents schema, which is an open source standard for defining events. + #customer intent: As a developer or architect, I want to know whether and how Azure Event Grid Namespaces support CloudEvents schema.
frontdoor Front Door Cdn Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-cdn-comparison.md
Azure Front Door and Azure CDN are both Azure services that offer global content
The following table provides a comparison between Azure Front Door and Azure CDN services.
-| Features and optimizations | Front Door Standard | Front Door Premium | Front Door Classic | Azure CDN Standard Microsoft | Azure CDN Standard Edgio | Azure CDN Premium Edgio |
+| Features and optimizations | Front Door Standard | Front Door Premium | Front Door (classic) | CDN Standard from Microsoft (classic) | CDN Standard from Edgio | CDN Premium from Edgio |
| | | | | | | | | **Delivery and acceleration** | | | | | | | | Static file delivery | &check; | &check; | &check; | &check; | &check; | &check; |
The following table provides a comparison between Azure Front Door and Azure CDN
| Compression encodings | gzip, brotli | gzip, brotli | gzip, brotli | gzip, brotli | gzip, deflate, bzip2 | gzip, deflate, bzip2, brotli | | Azure Policy integration | &check; | &check; | &check; | | | | | Azure Advisory integration | &check; | &check; | | &check; | &check; | &check; |
-| Managed Identities with Azure Key Vault | &check; | &check; | | | | |
+| Managed Identities with Azure Key Vault | &check; | &check; | | &check; | | |
| **Pricing** | | | | | | | | Simplified pricing | &check; | &check; | | &check; | &check; | &check; |
+## Services on retirement path
+The following table lists services that are on the retirement path, frequently asked questions about retirement, and migration guidance.
+
+| Details | Front Door (classic) | CDN Standard from Microsoft (classic) | CDN Standard from Akamai |
+| | | | |
+| Retirement Date | March 31, 2027 | September 30, 2027 | December 31, 2023 |
+| Last date to create new resources | March 31, 2025 | September 30, 2025 | Service is already retired |
+| Documentation | [Azure update](https://azure.microsoft.com/updates/azure-front-door-classic-will-be-retired-on-31-march-2027/), [FAQ](classic-retirement-faq.md) | [Azure update](https://azure.microsoft.com/updates/v2/Azure-CDN-Standard-from-Microsoft-classic-will-be-retired-on-30-September-2027), [FAQ](../cdn/classic-cdn-retirement-faq.md) | [FAQ](../cdn/akamai-retirement-faq.md)|
+| Migration | [Considerations](tier-migration.md), [Step-by-step instructions](migrate-tier.md) | [Considerations](../cdn/tier-migration.md), [Step-by-step instructions](../cdn/migrate-tier.md) | Service is already retired |
+++ ## Next steps * Learn how to [create an Azure Front Door](create-front-door-portal.md).
hdinsight Apache Hive Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector.md
kinit USERNAME
* [HWC and Apache Spark operations](./apache-hive-warehouse-connector-operations.md) * [Use Interactive Query with HDInsight](./apache-interactive-query-get-started.md) * [HWC integration with Apache Zeppelin](./apache-hive-warehouse-connector-zeppelin.md)
-* [Submitting Spark Applications via Spark-submit utility](https://spark.apache.org/docs/2.4.0/submitting-applications.html)
+* [Submitting Spark Applications via Spark-submit utility](https://archive.apache.org/dist/spark/docs/2.4.0/submitting-applications.html)
* [HWC 1.0 supported APIs](./hive-warehouse-connector-apis.md) * [HWC 2.0 supported APIs](./hive-warehouse-connector-v2-apis.md)
hdinsight Hive Warehouse Connector V2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-warehouse-connector-v2-apis.md
Complete the [Hive Warehouse Connector setup](./apache-hive-warehouse-connector.
* [HWC and Apache Spark operations](./apache-hive-warehouse-connector-operations.md) * [Use Interactive Query with HDInsight](./apache-interactive-query-get-started.md) * [HWC integration with Apache Zeppelin](./apache-hive-warehouse-connector-zeppelin.md)
-* [Submitting Spark Applications via Spark-submit utility](https://spark.apache.org/docs/2.4.0/submitting-applications.html)
+* [Submitting Spark Applications via Spark-submit utility](https://archive.apache.org/dist/spark/docs/2.4.0/submitting-applications.html)
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
Last updated 06/03/2022
# Enable Diagnostic Logging in Azure API for FHIR
-In this article, you'll learn how to enable diagnostic logging in Azure API for FHIR and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements (such as HIPAA) is a must. The feature in Azure API for FHIR that enables diagnostic logs is the [**Diagnostic settings**](/azure/azure-monitor/essentials/diagnostic-settings) in the Azure portal.
+In this article, you learn how to enable diagnostic logging in Azure API for FHIR&reg; and review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements (such as HIPAA) is a must. The feature in Azure API for FHIR that enables diagnostic logs is the [**Diagnostic settings**](/azure/azure-monitor/essentials/diagnostic-settings) in the Azure portal.
## View and Download FHIR Metrics Data
-You can view the metrics under Monitoring | Metrics from the portal. The metrics include Number of Requests, Average Latency, Number of Errors, Data Size, RUs Used, Number of requests that exceeded capacity, and Availability (in %). The Total Request Metrics will provide the number of requests reaching the FHIR service. This means for requests such as FHIR bundles, it will be considered as single request for logging.
+You can view the metrics under Monitoring | Metrics from the portal. The metrics include Number of Requests, Average Latency, Number of Errors, Data Size, request units (RUs) Used, Number of requests that exceeded capacity, and Availability (in %). The Total Request Metrics provides the number of requests reaching the FHIR service. This means requests such as FHIR bundles are counted as a single request for logging.
-The screenshot below shows RUs used for a sample environment with few activities in the last seven days. You can download the data in Json format.
+The following screenshot shows RUs used for a sample environment with few activities in the last seven days. You can download the data in JSON format.
:::image type="content" source="media/diagnostic-logging/fhir-metrics-rus-screen.png" alt-text="Azure API for FHIR Metrics from the portal" lightbox="media/diagnostic-logging/fhir-metrics-rus-screen.png":::
The screenshot below shows RUs used for a sample environment with few activities
5. Select the method you want to use to access your diagnostic logs:
- 1. **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to be already created.
+ 1. **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to already be created.
2. **Stream to event hub** for ingestion by a third-party service or custom analytic solution. You'll need to create an event hub namespace and event hub policy before you can configure this step.
- 3. **Stream to the Log Analytics** workspace in Azure Monitor. You'll need to create your Logs Analytics Workspace before you can select this option.
+ 3. **Stream to the Log Analytics** workspace in Azure Monitor. You need to create your Logs Analytics Workspace before you can select this option.
-6. Select **AuditLogs** and/or **AllMetrics**. The metrics include service name, availability, data size, total latency, total requests, total errors and timestamp. You can find more detail on [supported metrics](/azure/azure-monitor/essentials/metrics-supported#microsofthealthcareapisservices).
+6. Select **AuditLogs** and/or **AllMetrics**. The metrics include service name, availability, data size, total latency, total requests, total errors, and timestamp. Find more detail on [supported metrics](/azure/azure-monitor/essentials/metrics-supported#microsofthealthcareapisservices).
:::image type="content" source="media/diagnostic-logging/fhir-diagnostic-setting.png" alt-text="Azure FHIR Diagnostic Settings. Select AuditLogs and/or AllMetrics." lightbox="media/diagnostic-logging/fhir-diagnostic-setting.png":::
The screenshot below shows RUs used for a sample environment with few activities
> [!Note] > It might take up to 15 minutes for the first Logs to show in Log Analytics. Also, if Azure API for FHIR is moved from one resource group or subscription to another, update the setting once the move is complete.
-For more information on how to work with diagnostic logs, please refer to the [Azure Resource Log documentation](/azure/azure-monitor/essentials/platform-logs-overview)
+For more information on how to work with diagnostic logs, refer to the [Azure Resource Log documentation](/azure/azure-monitor/essentials/platform-logs-overview).
## Audit log details
-At this time, the Azure API for FHIR service returns the following fields in the audit log:
+At this time, the Azure API for FHIR service returns the following fields in the audit log.
|Field Name |Type |Notes | |||| |CallerIdentity|Dynamic|A generic property bag containing identity information
-|CallerIdentityIssuer|String|Issuer
-|CallerIdentityObjectId|String|Object_Id
-|CallerIPAddress|String|The callerΓÇÖs IP address
-|CorrelationId|String| Correlation ID
-|FhirResourceType|String|The resource type for which the operation was executed
-|LogCategory|String|The log category (we're currently returning ΓÇÿAuditLogsΓÇÖ LogCategory)
-|Location|String|The location of the server that processed the request (for example, South Central US)
-|OperationDuration|Int|The time it took to complete this request in seconds. Note : This value is always set to 0, due to a known issue
-|OperationName|String| Describes the type of operation (for example, update, search-type)
-|RequestUri|String|The request URI
-|ResultType|String|The available values currently are **Started**, **Succeeded**, or **Failed**
-|StatusCode|Int|The HTTP status code. (for example, 200)
+|CallerIdentityIssuer|String|Issuer |
+|CallerIdentityObjectId|String|Object_Id |
+|CallerIPAddress|String|The caller's IP address |
+|CorrelationId|String| Correlation ID |
+|FhirResourceType|String|The resource type for which the operation was executed |
+|LogCategory|String|The log category (currently returning 'AuditLogs' LogCategory) |
+|Location|String|The location of the server that processed the request (for example, South Central US) |
+|OperationDuration|Int|The time it took to complete this request in seconds. **Note**: This value is always set to 0 due to a known issue. |
+|OperationName|String| Describes the type of operation (for example, update, search-type) |
+|RequestUri|String|The request URI |
+|ResultType|String|The available values currently are **Started**, **Succeeded**, or **Failed** |
+|StatusCode|Int|The HTTP status code. (for example, 200) |
|TimeGenerated|DateTime|Date and time of the event|
-|Properties|String| Describes the properties of the fhirResourceType
-|SourceSystem|String| Source System (always Azure in this case)
-|TenantId|String|Tenant ID
-|Type|String|Type of log (always MicrosoftHealthcareApisAuditLog in this case)
-|_ResourceId|String|Details about the resource
+|Properties|String| Describes the properties of the fhirResourceType |
+|SourceSystem|String| Source System (always Azure in this case) |
+|TenantId|String|Tenant ID |
+|Type|String|Type of log (always MicrosoftHealthcareApisAuditLog in this case) |
+|_ResourceId|String|Details about the resource |
## Sample queries Here are a few basic Application Insights queries you can use to explore your log data.
-Run this query to see the **100 most recent** logs:
+Run the following query to see the **100 most recent** logs.
```Application Insights MicrosoftHealthcareApisAuditLogs | limit 100 ```
-Run this query to group operations by **FHIR Resource Type**:
+Run the following query to group operations by **FHIR Resource Type**.
```Application Insights MicrosoftHealthcareApisAuditLogs | summarize count() by FhirResourceType ```
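If you also want to see how request outcomes trend over time, the following sketch uses the `TimeGenerated` and `ResultType` fields from the table above to bucket results by hour over the last day:

```Application Insights
MicrosoftHealthcareApisAuditLogs
| where TimeGenerated > ago(1d)
| summarize count() by bin(TimeGenerated, 1h), ResultType
```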
-Run this query to get all the **failed results**
+Run the following query to get all the **failed results**.
```Application Insights MicrosoftHealthcareApisAuditLogs
MicrosoftHealthcareApisAuditLogs
``` ## Conclusion
-Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. Azure API for FHIR allows you to do these actions through diagnostic logs.
-
-FHIR is the registered trademark of HL7 and is used with the permission of HL7.
+Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. Azure API for FHIR allows you to take these actions through diagnostic logs.
## Next steps In this article, you learned how to enable Audit Logs for Azure API for FHIR. For information about Azure API for FHIR configuration settings, see
In this article, you learned how to enable Audit Logs for Azure API for FHIR. Fo
>[!div class="nextstepaction"] >[Configure Private Link](configure-private-link.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
# Export FHIR data in Azure API for FHIR
-The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://www.hl7.org/fhir/uv/bulkdata/).
+The Bulk Export feature allows data to be exported from the FHIR&reg; Server per the [FHIR specification](https://www.hl7.org/fhir/uv/bulkdata/).
-Before using $export, you want to make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating Azure storage account, refer to [the configure export data page](configure-export-data.md).
+Before using `$export`, make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating an Azure storage account, refer to the [configure export data page](configure-export-data.md).
> [!NOTE] > Only storage accounts in the same subscription as the Azure API for FHIR are allowed to be registered as the destination for $export operations. ## Using $export command
-After configuring the Azure API for FHIR for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke $export command in FHIR server, read documentation on the [HL7 FHIR $export specification](https://www.hl7.org/fhir/uv/bulkdata/).
+After configuring the Azure API for FHIR for export, you can use the `$export` command to export the data out of the service. The data is stored in the storage account you specified while configuring export. To learn how to invoke the `$export` command in FHIR server, read documentation in the [HL7 FHIR $export specification](https://www.hl7.org/fhir/uv/bulkdata/).
**Jobs stuck in a bad state**
-In some situations, thereΓÇÖs a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions havenΓÇÖt been set up properly. One way to validate export is to check your storage account to see if the corresponding container (that is, `ndjson`) files are present. If they arenΓÇÖt present, and there are no other export jobs running, then thereΓÇÖs a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try requeuing the job again. Our default run time for an export in bad state is 10 minutes before it will stop and move to a new job or retry the export.
+In some situations, a job may get stuck in a bad state. This can occur if the storage account permissions haven't been set up properly. One way to validate an export is to check your storage account to see if the corresponding container (that is, `ndjson`) files are present. If they aren't present, and there are no other export jobs running, then it's possible the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try requeuing the job again. Our default run time for an export in a bad state is 10 minutes before it stops and moves to a new job or retries the export.
-The Azure API For FHIR supports $export at the following levels:
+The Azure API For FHIR supports `$export` at the following levels:
* [System](https://www.hl7.org/fhir/uv/bulkdata/): `GET https://<<FHIR service base URL>>/$export>>` * [Patient](https://www.hl7.org/fhir/uv/bulkdata/): `GET https://<<FHIR service base URL>>/Patient/$export>>` * [Group of patients*](https://www.hl7.org/fhir/uv/bulkdata/) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export>>`
-With export, data is exported in multiple files each containing resources of only one type. The number of resources in an individual file will be limited. The maximum number of resources is based on system performance. It's currently set to 5,000, but can change. The result is that you might get multiple files for a resource type. The file names will follow the format 'resourceName-number-number.ndjson'. The order of the files isn't guaranteed to correspond to any ordering of the resources in the database.
+Data is exported in multiple files, each containing resources of only one type. The number of resources in an individual file will be limited. The maximum number of resources is based on system performance. It's currently set to 5,000, but can change. The result is that you might get multiple files for a resource type. The file names follow the format 'resourceName-number-number.ndjson'. The order of the files isn't guaranteed to correspond to any ordering of the resources in the database.
> [!NOTE] > `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
-In addition, checking the export status through the URL returned by the location header during the queuing is supported along with canceling the actual export job.
+In addition, checking the export status through the URL returned by the location header during the queuing is supported, along with canceling the actual export job.
### Exporting FHIR data to ADLS Gen2
-Currently we support $export for ADLS Gen2 enabled storage accounts, with the following limitation:
+Currently we support `$export` for ADLS Gen2 enabled storage accounts, with the following limitations:
-- User canΓÇÖt take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).-- Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.
+- Users canΓÇÖt take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md) - there isn't a way to target an export to a specific subdirectory within a container. We only provide the ability to target a specific container (where a new folder is created for each export).
+- Once an export is complete, nothing is ever exported to that folder again. Subsequent exports to the same container will be inside a newly created folder.
## Settings and parameters ### Headers
-There are two required header parameters that must be set for $export jobs. The values are defined by the current [$export specification](https://www.hl7.org/fhir/uv/bulkdata/).
+There are two required header parameters that must be set for `$export` jobs. The values are defined by the current [$export specification](https://www.hl7.org/fhir/uv/bulkdata/).
* **Accept** - application/fhir+json * **Prefer** - respond-async ### Query parameters
-The Azure API for FHIR supports the following query parameters. All of these parameters are optional:
+The Azure API for FHIR supports the following query parameters. All of these parameters are optional.
|Query parameter | Defined by the FHIR Spec? | Description| |||| | \_outputFormat | Yes | Currently supports three values to align to the FHIR Spec: application/fhir+ndjson, application/ndjson, or ndjson. All export jobs return `ndjson` and the passed value has no effect on code behavior. |
-| \_since | Yes | Allows you to only export resources that have been modified since the time provided |
-| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources|
-| \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
-| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder into that container. If the container isnΓÇÖt specified, the data will be exported to a new container. |
-| \_till | No | Allows you to only export resources that have been modified till the time provided. This parameter is applicable to only System-Level export. In this case, if historical versions haven't been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
-|includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with '_typeFilter' query parameter. Include value as '_history' to export history/ non latest versioned resources. Include value as '_deleted' to export soft deleted resources. |
-|\_isparallel| No |The "_isparallel" query parameter can be added to the export operation to enhance its throughput. The value needs to be set to true to enable parallelization. It is important to note that using this parameter may result in an increase in the request units consumption over the life of export. |
+| \_since | Yes | Allows you to only export resources that have been modified since the time provided. |
+| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources.|
+| \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results. |
+| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data is exported into a folder in that container. If the container isn't specified, the data is exported to a new container. |
+| \_till | No | Allows you to only export resources that have been modified up to the time provided. This parameter is only applicable to System-Level export. In this case, if historical versions haven't been disabled or purged, export guarantees a true snapshot view. In other words, it enables time travel. |
+|includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with the '_typeFilter' query parameter. Include the value as '_history' to export history (non-latest versioned) resources. Include the value as '_deleted' to export soft deleted resources. |
+|\_isparallel| No |The "_isparallel" query parameter can be added to the export operation to enhance its throughput. The value needs to be set to true to enable parallelization. Note: Using this parameter may result in an increase in request units consumption over the life of export. |
> [!NOTE]
-> There is a known issue with the $export operation that could result in incomplete exports with status success. Issue occurs when the is_parallel flag was used. Export jobs executed with _isparallel query parameter starting February 13th, 2024 are impacted with this issue.
+> There is a known issue with the `$export` operation that could result in incomplete exports with status success. The issue occurs when the is_parallel flag was used. Export jobs executed with the _isparallel query parameter starting February 13, 2024 are impacted by this issue.
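Putting the headers and parameters together, a request that exports only Patient resources modified since a given time into a specific container might look like the following sketch; the base URL, timestamp, and container name are placeholders.

```http
GET https://<<FHIR service base URL>>/$export?_type=Patient&_since=2024-01-01T00:00:00Z&_container=myexportcontainer
Accept: application/fhir+json
Prefer: respond-async
```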
## Secure Export to Azure Storage
-Azure API for FHIR supports a secure export operation. Choose one of the two options below:
+Azure API for FHIR supports a secure export operation. Choose one of the following two options.
* Allowing Azure API for FHIR as a Microsoft Trusted Service to access the Azure storage account. * Allowing specific IP addresses associated with Azure API for FHIR to access the Azure storage account.
-This option provides two different configurations depending on whether the storage account is in the same location as, or is in a different location from that of the Azure API for FHIR.
+This option provides two different configurations depending on whether the storage account is in the same location as the Azure API for FHIR or in a different location.
### Allowing Azure API for FHIR as a Microsoft Trusted Service
Under the **Exceptions** section, select the box **Allow trusted Microsoft servi
:::image type="content" source="media/export-data/exceptions.png" alt-text="Allow trusted Microsoft services to access this storage account.":::
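If you script the storage account configuration instead of using the portal, the equivalent of restricting access to selected networks while allowing trusted Microsoft services can typically be applied with the Azure CLI; the names below are placeholders.

```azurecli
az storage account update \
  --resource-group <resource-group> \
  --name <storage-account-name> \
  --default-action Deny \
  --bypass AzureServices
```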
-You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and isnΓÇÖt publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account for a short period of time.
+You're now ready to export FHIR data to the storage account securely. Note: The storage account is on selected networks and isn't publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account for a short period of time.
> [!IMPORTANT] > The user interface will be updated later to allow you to select the Resource type for Azure API for FHIR and a specific service instance.
You're now ready to export FHIR data to the storage account securely. Note that
## Next steps
-In this article, you've learned how to export FHIR resources using $export command. Next, to learn how to export de-identified data, see
+In this article, you learned how to export FHIR resources using the `$export` command. Next, to learn how to export de-identified data, see
>[!div class="nextstepaction"] >[Export de-identified data](de-identified-export.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/configure-private-endpoints.md
+
+ Title: Configure Private Endpoint network access to Azure Health Data Services de-identification service
+description: Learn how to restrict network access to your de-identification service.
Last updated : 09/26/2024+++++
+# customer intent: As an IT admin, I want to restrict network access to a de-identification service to a private endpoint in a virtual network.
++
+# Configure Private Endpoint network access to Azure Health Data Services de-identification service (preview)
+Azure Private Link enables you to access Azure services over a **private endpoint** in your virtual network.
+
+A private endpoint is a network interface that connects you privately and securely to an Azure service which supports Azure Private Link. The private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. All traffic to the service is routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating exposure from the public Internet. You can restrict connections to specific instances of an Azure service, giving you the highest level of granularity in access control.
+
+For more information, see [What is Azure Private Link?](../../private-link/private-link-overview.md)
+
+## Add a private endpoint using the Azure portal
+
+### Prerequisites
+
+> [!IMPORTANT]
+> Before enabling Private Endpoint access to your de-identification service (preview), you will need to [create a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) to request access to this feature for your subscription.
+> Create the request under **Azure Health Data Services > General question > De-identification service > Configuration and management**
+
+- A de-identification service in your Azure subscription. If you don't have a de-identification service, follow the steps in [Quickstart: Deploy the de-identification service](quickstart.md).
+- Owner or contributor permissions for the de-identification service.
+
+### Create a private endpoint
+
+Follow the steps at [Quickstart: Create a private endpoint by using the Azure portal](/azure/private-link/create-private-endpoint-portal).
+
+- Instead of a webapp, create a private endpoint to a de-identification service (preview).
+- When you reach [Create a private endpoint](/azure/private-link/create-private-endpoint-portal?tabs=dynamic-ip#create-a-private-endpoint), step 5, enter resource type **Microsoft.HealthDataAIServices/deidServices**.
+- Your private endpoint and virtual network must be in the same region. When you select a region for the private endpoint using the portal, it automatically filters virtual networks that are in that region. Your de-identification service can be in a different region.
+- When you reach [Test connectivity to the private endpoint](/azure/private-link/create-private-endpoint-portal?tabs=dynamic-ip#test-connectivity-to-the-private-endpoint) steps 8 and 10, use the service URL of your de-identification service plus the `/health` path.
+
+### Configure private access
+
+> [!IMPORTANT]
+> Creating a private endpoint does **not** restrict public network access automatically.
+
+When creating a de-identification service (preview), you can allow either public-only access (from all networks) or private-only access (only via private endpoints) to the de-identification service.
+
+If you already have a de-identification service, you can configure network access by going to the service's Azure portal **Networking** page, and under **Public network access**, selecting **Disabled**.
+
+## Manage private endpoints using Azure portal
+
+When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory, you can approve the connection request provided you have sufficient permissions. If you're connecting to an Azure resource in another directory, you must wait for the owner of that resource to approve your connection request.
+
+There are four provisioning states:
+
+| Service action | Service consumer private endpoint state | Description |
+|--|--|--|
+| None | Pending | Connection is created manually and is pending approval from the target resource owner. |
+| Approve | Approved | Connection was automatically or manually approved and is ready to be used. |
+| Reject | Rejected | The target resource owner rejected the connection. |
+| Remove | Disconnected | The target resource owner removed the connection. The private endpoint should be deleted for cleanup. |
+
+### Approve, reject, or remove a private endpoint connection
+
+1. Sign in to the Azure portal.
+2. In the search bar, type in **de-id**.
+3. Select the **de-identification service** that you want to manage.
+4. Select the **Networking** tab.
+5. Go to the appropriate section that follows, based on the operation you want to perform: approve, reject, or remove.
+
+### Approve a private endpoint connection
+1. If there are any connections that are pending, you see a connection listed with **Pending** in the provisioning state.
+2. Select the **private endpoint** you wish to approve.
+3. Select the **Approve** button.
+4. On the **Approve connection** page, add a comment (optional), and select **Yes**. If you select **No**, nothing happens.
+5. You should see the status of the private endpoint connection in the list changed to **Approved**.
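If you prefer scripting over the portal, a pending connection can typically be approved with the Azure CLI as well; the sketch below uses placeholder names and the resource type from the earlier private endpoint creation step.

```azurecli
az network private-endpoint-connection approve \
  --resource-group <resource-group> \
  --resource-name <deid-service-name> \
  --type Microsoft.HealthDataAIServices/deidServices \
  --name <private-endpoint-connection-name> \
  --description "Approved via CLI"
```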
+
+### Reject a private endpoint connection
+
+1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection, select the connection and select the **Reject** button.
+2. On the **Reject connection** page, enter a comment (optional), and select **Yes**. If you select **No**, nothing happens.
+3. You should see the status of the private endpoint connection in the list changed to **Rejected**.
+
+### Remove a private endpoint connection
+
+1. To remove a private endpoint connection, select it in the list, and select **Remove** on the toolbar.
+2. On the **Delete connection** page, select **Yes** to confirm the deletion of the private endpoint. If you select **No**, nothing happens.
+3. You should see the status changed to **Disconnected**. Then, the endpoint disappears from the list.
+
+## Limitations and design considerations
+
+- For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+- This feature is available in all Azure public regions.
+- Because network traffic is blocked at the application layer, you can still ping the public endpoint of your service even though public network access is disabled.
+
+For more information, see [Azure Private Link service: Limitations](../../private-link/private-link-service-overview.md#limitations).
+
+## Related content
+
+- Learn more about [Azure Private Link](../../private-link/private-link-service-overview.md)
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
IoT Edge supports the [*transparent* and *translation* gateway patterns](../../i
:::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/edge-transparent-gateway.png" alt-text="Diagram that shows IoT Edge as a transparent gateway." border="false"::: + For simplicity, this article uses virtual machines to host the downstream and gateway devices. In a real scenario, the downstream device and gateway would run on physical devices on your local network. This article shows how to implement the scenario by using the IoT Edge 1.4 runtime.
iot-hub-device-update Device Update Tls Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-tls-download.md
The Device Update for IoT Hub samples also have functions to parse the URL that
Finally, you may also need to make changes to your own implementation, such as changing the HTTPS header buffer to manage the update URL format that your device will receive from Device Update.
+## Certificate information
+
+The certificate used to enable the TLS connection is issued by **Microsoft Azure RSA TLS Issuing CA 03**. Devices that download content over TLS from the Device Update service need to be provisioned with one or more certificates that have Microsoft Azure RSA TLS Issuing CA 03 as their root.
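If you need to confirm which issuing CA a download endpoint presents to your device, one option is to inspect the leaf certificate's issuer with OpenSSL; the host below is a placeholder for the content endpoint your device actually uses.

```bash
# Print the issuer of the certificate presented by the download endpoint (placeholder host)
openssl s_client -connect <download-endpoint-host>:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer
```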
+ ## Next steps [Troubleshoot common issues](troubleshoot-device-update.md)
iot Set Up Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/set-up-environment.md
ms.devlang: azurecli
Before you can complete any of the IoT Plug and Play quickstarts and tutorials, you need to configure an IoT hub and the Device Provisioning Service (DPS) in your Azure subscription. You'll also need local copies of the model files used by the sample applications and the Azure IoT explorer tool. + ## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
iot Tutorial Migrate Device To Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-migrate-device-to-module.md
This tutorial shows you how to connect a generic IoT Plug and Play [module](../iot-hub/iot-hub-devguide-module-twins.md). + A device is an IoT Plug and Play device if it: * Publishes its model ID when it connects to an IoT hub.
To demonstrate how to implement an IoT Plug and Play module, this tutorial shows
To complete this tutorial, install the following software in your local development environment: * Install the latest .NET for your operating system from [https://dot.net](https://dot.net).
-* [Git](https://git-scm.com/download/).
+* [Git](https://git-scm.com/downloads/).
Use the Azure IoT explorer tool to add a new device called **my-module-device** to your IoT hub.
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
# Assessment overview (migrate to Azure VMs)
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This article provides an overview of assessments in the [Azure Migrate: Discovery and assessment](migrate-services-overview.md) tool. The tool can assess on-premises servers in VMware virtual and Hyper-V environment, and physical servers for migration to Azure. ## What's an assessment?
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
# Support matrix for Hyper-V migration
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This article summarizes support settings and limitations for migrating Hyper-V VMs with [Migration and modernization](migrate-services-overview.md). If you're looking for information about assessing Hyper-V VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-hyper-v.md). ## Migration limitations
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
ms.custom: engagement-fy25
# Support matrix for Hyper-V assessment
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This article summarizes prerequisites and support requirements when you discover and assess on-premises servers running in a Hyper-V environment for migration to Azure by using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md) tool. If you want to migrate servers running on Hyper-V to Azure, see the [migration support matrix](migrate-support-matrix-hyper-v-migration.md). To set up discovery and assessment of servers running on Hyper-V, you create a project and add the Azure Migrate: Discovery and assessment tool to the project. After the tool is added, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers and sends server metadata and performance data to Azure. After discovery is complete, you gather discovered servers into groups and run an assessment for a group.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
# Support matrix for physical server discovery and assessment
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This article summarizes prerequisites and support requirements when you assess physical servers for migration to Azure by using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md) tool. If you want to migrate physical servers to Azure, see the [migration support matrix](migrate-support-matrix-physical-migration.md). To assess physical servers, you create a project and add the Azure Migrate: Discovery and assessment tool to the project. After you add the tool, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers and sends servers metadata and performance data to Azure. After discovery is finished, you gather discovered servers into groups and run an assessment for a group.
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
# Prepare on-premises machines for migration to Azure
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This article describes how to prepare on-premises machines before you migrate them to Azure using the [Migration and modernization](migrate-services-overview.md) tool. In this article, you:
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
Last updated 09/19/2024
# Java web app containerization and migration to Azure App Service
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure App Service](https://azure.microsoft.com/services/app-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure App Service. The Azure Migrate: App Containerization tool currently supports:
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
Last updated 09/19/2024
# Java web app containerization and migration to Azure Kubernetes Service
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS). The Azure Migrate: App Containerization tool currently supports -
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
# Tutorial: Build a business case or assess servers using an imported CSV file
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- As part of your migration journey to Azure, you discover your on-premises inventory and workloads. This tutorial shows you how to build a business case or assess on-premises machines with the Azure Migrate: Discovery and Assessment tool, using an imported comma-separated values (CSV) file.
migrate Tutorial Discover Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md
# Tutorial: Discover Spring Boot applications running in your datacenter (preview)
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This article describes how to discover Spring Boot applications running on servers in your datacenter, using the Azure Migrate: Discovery and assessment tool. The discovery process is completely agentless; no agents are installed on the target servers.
After copying the script, you can go to your Linux server, save the script as *D
- | - **Supported Linux OS** | Ubuntu 20.04, RHEL 9 **Hardware configuration required** | 6 GB RAM, with 30 GB storage on root volume, 4 Core CPU
- **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> https://legoonboarding.blob.core.windows.net <br/><br/> [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
+ **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints)
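Before running the script, a simple reachability check against the required endpoint can save troubleshooting time; for example (this is only a sketch, and a proxy or firewall may still need configuration):

```bash
# Print the HTTP status code returned by the telemetry endpoint the script uses.
# Any HTTP response indicates the endpoint is reachable; a timeout usually
# points to a proxy or firewall issue.
curl -s -o /dev/null -w "%{http_code}\n" https://dc.services.visualstudio.com/v2/track
```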
5. After copying the script, go to your Linux server, save the script as *Deploy.sh* on the server.
After you save the script on the Linux server, follow these steps:
> [!Note] > This script needs to be run after you connect to a Linux machine on its terminal that meets the networking prerequisites and OS compatibility requirements.
-> Ensure that you have curl installed on the server. For Ubuntu, you can install it using the command `sudo apt-get install curl`, and for other OS (RHEL/CentOS), you can use the command `yum install curl`.
+> Ensure that you have curl installed on the server. For Ubuntu, you can install it using the command `sudo apt-get install curl`, and for other OS (RHEL), you can use the command `yum install curl`.
> [!Important] > Don't edit the script unless you want to clean up the setup.
After you save the script on the Linux server, follow these steps:
> [!Note] > - This script needs to be run after you connect to a Linux machine on its terminal that meets the networking prerequisites and OS compatibility.
-> - Ensure that you have curl installed on the server. For Ubuntu, you can install it using the command `sudo apt-get install curl`, and for other OS (RHEL/CentOS), you can use the `yum install curl` command.
+> - Ensure that you have curl installed on the server. For Ubuntu, you can install it using the command `sudo apt-get install curl`, and for other OS (RHEL), you can use the `yum install curl` command.
> [!Important] > Don't edit the script unless you want to clean up the setup.
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
# Discover, assess, and migrate Amazon Web Services (AWS) VMs to Azure
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This tutorial shows you how to discover, assess, and migrate Amazon Web Services (AWS) virtual machines (VMs) to Azure VMs by using Azure Migrate: Server Assessment and the Migration and modernization tool. > [!NOTE]
After you verify that the test migration works as expected, you can migrate the
**Question:** Can I migrate AWS VMs running the Amazon Linux operating system?<br> **Answer:** VMs running Amazon Linux can't be migrated as is because the Amazon Linux OS is only supported on AWS.
-To migrate workloads running on Amazon Linux, you can spin up a CentOS/RHEL VM in Azure. Then you can migrate the workload running on the AWS Linux machine by using a relevant workload migration approach. For example, depending on the workload, there might be workload-specific tools to aid the migration. These tools might be for databases or deployment tools for web servers.
+To migrate workloads running on Amazon Linux, you can spin up a RHEL VM in Azure. Then you can migrate the workload running on the AWS Linux machine by using a relevant workload migration approach. For example, depending on the workload, there might be workload-specific tools to aid the migration. These tools might be for databases or deployment tools for web servers.
## Next steps
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware-migration.md
This article summarizes support settings and limitations for migrating VMware vSphere VMs with [Migration and modernization](../migrate-services-overview.md). If you're looking for information about assessing VMware vSphere VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-vmware.md).
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- ## Migration options You can migrate VMware vSphere VMs in a couple of ways:
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware.md
zone_pivot_groups: vmware-discovery-requirements
# Support matrix for VMware discovery
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- This article summarizes prerequisites and support requirements for using the [Azure Migrate: Discovery and assessment](../migrate-services-overview.md) tool to discover and assess servers in a VMware environment for migration to Azure. To assess servers, first, create an Azure Migrate project. The Azure Migrate: Discovery and assessment tool is automatically added to the project. Then, deploy the Azure Migrate appliance. The appliance continuously discovers on-premises servers and sends configuration and performance metadata to Azure. When discovery is finished, gather the discovered servers into groups and run assessments per group.
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/prepare-for-agentless-migration.md
This article provides an overview of the changes performed when you [migrate VMware VMs to Azure via the agentless migration](./tutorial-migrate-vmware.md) method using the Migration and modernization tool.
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
- Before you migrate your on-premises VM to Azure, you may require a few changes to make the VM ready for Azure. These changes are important to ensure that the migrated VM can boot successfully in Azure and that connectivity to the Azure VM can be established. Azure Migrate automatically handles these configuration changes for the following operating system versions for both Linux and Windows. This process is called *Hydration*.
The preparation script executes the following changes based on the OS type of th
Once the root partition is discovered, the script will use the following files to determine the Linux Operating System distribution and version.
- - RHEL/CentOS: etc/redhat-release
+ - RHEL: etc/redhat-release
- OL: etc/oracle-release - SLES: etc/SuSE-release - Ubuntu: etc/lsb-release
operator-nexus Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-storage.md
The storage appliance in an Azure Operator Nexus instance is represented as an A
The Azure Operator Nexus software Kubernetes stack offers two types of storage. Operators select them through the Kubernetes StorageClass mechanism.
+> [!IMPORTANT]
+> Azure Operator Nexus doesn't support ephemeral volumes. Nexus recommends using the persistent volume storage mechanisms described in this document for all workload volumes, because they provide the highest levels of performance and availability. All storage in Azure Operator Nexus is provided by the storage appliance; storage on bare metal machine disks isn't supported.
+ ### StorageClass: nexus-volume The default storage mechanism, *nexus-volume*, is the preferred choice for most users. It provides the highest levels of performance and availability. However, volumes can't be simultaneously shared across multiple worker nodes. Operators can access and manage these volumes by using the Azure API and portal, through the volume resource.
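As an illustration only (the claim name and size are hypothetical), a workload could request storage from this class with a standard PersistentVolumeClaim:

```bash
# Create a hypothetical 10Gi claim against the default nexus-volume class.
# ReadWriteOnce reflects that nexus-volume volumes can't be shared across nodes.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-nexus-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nexus-volume
  resources:
    requests:
      storage: 10Gi
EOF
```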
operator-nexus Howto Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-azure-policy.md
# Use Azure Policy to secure your Nexus resources
-In this article, you'll learn how to use Azure Policy to secure and validate the compliance status of your Nexus resources.
+In this article, you can learn how to use Azure Policy to secure and validate the compliance status of your Nexus resources.
## Before you begin
If you're new to Azure Policy, here are some helpful resources that you can use
##### Understanding Policy Definitions and Assignments -- **Policy Definitions**: These are the rules that your resources need to comply with. They can be built-in or custom.
+- **Policy Definitions**: The rules that your resources need to comply with. They can be built-in or custom.
- **Assignments**: The process of applying a policy definition to your resources. ##### Steps for security enforcement
-1. **Explore built-in policies**: Review built-in policies relevant to Nexus Bare Metal Machine (BMM) resources.
+1. **Explore built-in policies**: Review built-in policies relevant to Nexus Bare Metal Machine (BMM) and Compute Cluster resources.
2. **Customize policies**: Customize policies to address specific needs of your resources. 3. **Policy assignment**: Assign policies through the Azure portal, ensuring correct scope. 4. **Monitoring and compliance**: Regularly monitor policy compliance using Azure tools.
If you're new to Azure Policy, here are some helpful resources that you can use
## Use Azure Policy to secure your Nexus BMM resources
-The Operator Nexus service offers a built-in policy definition that is recommended to be assigned to your Nexus BMM resources. This policy definition is called **[Preview]: Nexus compute machines should meet security baseline**. This policy definition is used to ensure that your Nexus BMM resources are configured with industry best practice security settings.
+The Operator Nexus service offers a built-in policy definition that we recommend assigning to your Nexus BMM resources. This policy definition is called **[Preview]: Nexus compute machines should meet security baseline**. It ensures that your Nexus BMM resources are configured with industry best practice security settings.
- [[Preview]: Nexus compute machines should meet security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fec2c1bce-5ad3-4b07-bb4f-e041410cd8db)
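For example, a hedged Azure CLI sketch of assigning this definition at a hypothetical resource group scope (the assignment name and scope are placeholders) might look like:

```bash
# Assign the built-in Nexus BMM security baseline policy definition.
az policy assignment create \
  --name "nexus-bmm-security-baseline" \
  --policy "ec2c1bce-5ad3-4b07-bb4f-e041410cd8db" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<nexus-resource-group>"
```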
-## Use Azure Policy to secure your Nexus Kubernetes cluster
+## Use Azure Policy to secure your Nexus Kubernetes Compute Cluster resources
-Operator Nexus Arc-connected Nexus Kubernetes do not yet have built-in policy definitions available. However, you can create custom policy definitions to meet your organization's security and compliance requirements or utilize built-in policy definitions for AKS clusters.
+The Operator Nexus service offers a built-in initiative definition that we recommend assigning to your Nexus Kubernetes Compute Cluster resources. This initiative definition is called **[Preview]: Nexus compute cluster should meet security baseline**. It ensures that your Nexus Kubernetes Compute Cluster resources are configured with industry best practice security settings.
-- [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md)-- [Azure Policy Built-in definitions for AKS](/azure/aks/policy-reference)
+- [[Preview]: Nexus compute cluster should meet security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/InitiativeDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2F336cb876-5cb8-4795-b9d1-bd9323d3487e)
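+Assigning the initiative follows the same pattern as the policy assignment shown earlier; a hedged sketch with placeholder names and scope:
+
+```bash
+# Assign the built-in Nexus compute cluster security baseline initiative.
+az policy assignment create \
+  --name "nexus-cluster-security-baseline" \
+  --policy-set-definition "336cb876-5cb8-4795-b9d1-bd9323d3487e" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<nexus-resource-group>"
+```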
-### Customizing Policies for Nexus Kubernetes cluster
+### Customizing Policies
-- Customize policies considering the unique aspects of Nexus Kubernetes clusters, such as network configurations and container security.
+- Customize policies considering the unique aspects of the specific resources.
- Refer to [Custom policy definitions](../governance/policy/tutorials/create-custom-policy-definition.md) for guidance. ## Apply and validate Policies for Nexus resources
-Whether you are securing Nexus BMM resources or Nexus Kubernetes clusters, the process of applying and validating policies is similar. Here's a generalized approach:
+Whether you're securing Nexus BMM resources or Nexus Kubernetes Compute Clusters, the process of applying and validating policies is similar. Here's a generalized approach:
1. **Identify Suitable Policies**: - For Nexus Bare Metal Machine resources, consider the recommended **[Preview]: Nexus compute machines should meet security baseline** policy.
- - For Nexus Kubernetes clusters, explore [built-in AKS policies](/azure/aks/policy-reference) or create custom policy definitions to meet specific security and compliance needs.
- - Review [Azure Policy Built-in definitions](../governance/policy/samples/built-in-policies.md) and [Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md) for more insights.
+ - For Nexus Kubernetes Compute Clusters, consider the recommended **[Preview]: Nexus compute cluster should meet security baseline** initiative.
2. **Assign Policies**:
operator-nexus Reference Near Edge Storage Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-storage-supported-versions.md
Each number in the version indicates general compatibility with the previous ver
## Release process
-1. **End of support:**
+1. **End of support:**
- Nexus will announce end of support for the oldest supported LTS version via release notes once the timeline for the new LTS version is available.
- - Nexus will stop supporting the oldest supported LTS version shortly before adding support for new LTS version (that is before the next LTS version is ready for testing in labs).
+ - Nexus will stop supporting the oldest supported LTS version shortly before adding support for new LTS version (that is before the next LTS version is ready for testing in labs).
3. **Introduction:** Nexus typically declares support for a new LTS release once the first patch release is available. This is to benefit from any critical fixes. Release cadence: - By default, the introduction of any new release support (LTS/patch) will be combined with Nexus runtime release.
- - Introduction of a new LTS release may, in rare cases, require a specific upgrade ordering and a timeline.
- - Depending on severity of Common Vulnerabilities & Exposures (CVE) fixes or blocker issues, a Purity version may be verified and introduced outside of a runtime release.
+ - Introduction of a new LTS release may, in rare cases, require a specific upgrade ordering and a timeline.
+ - Depending on severity of Common Vulnerabilities & Exposures (CVE) fixes or blocker issues, a Purity version may be verified and introduced outside of a runtime release.
## Supported Storage Software Versions (Purity)
Each number in the version indicates general compatibility with the previous ver
| 6.5.1 | Nexus 2403.x | Dec 2025* | | | 6.5.4 | Nexus 2404.x | Dec 2025* | | | 6.5.6 | Nexus 2406.2 | Dec 2025* | Aligned with Nexus runtime release |
+| 6.5.8 | Nexus 2408.2 | Dec 2025* | |
> [!IMPORTANT] > \* At most, two LTS versions are supported at a time. The dates are tentative and assume that another set of LTS versions will be released by this timeframe, at which point this version is deprecated per our support guidelines.
Each number in the version indicates general compatibility with the previous ver
| R3 | Year 2021 | | R4 | Nexus 2404.x |
+## Supported Pure FlashArray Expansion Shelf firmware versions
+
+Azure Operator Nexus supports and tests the latest combination of a Purity version and FlashArray expansion shelf version (DFS) at the time of a Nexus release.
+
+| PurityOS Version | DFS Version|
+|||
+| 6.5.8 | 2.2.1 |
+ ## FAQ ### How does Microsoft notify me of a new supported Purity version?
partner-solutions Informatica Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/informatica/informatica-overview.md
Title: What is Informatica Intelligent Data Management Cloud?
description: Learn about using the Informatica Intelligent Data Management Cloud - Azure Native ISV Service. Previously updated : 04/02/2024 Last updated : 09/27/2024
Last updated 04/02/2024
Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and Informatica.
-You can find Informatica Intelligent Data Management Cloud - Azure Native ISV Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview).
+<!-- You can find Informatica Intelligent Data Management Cloud - Azure Native ISV Service in the [Azure portal](https://portal.azure.com/) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/).-->
Use this offering to manage your Informatica organization as an Azure Native ISV Service. You can easily run and manage Informatica Organizations and advanced serverless environments as you need and get started through Azure Clients.
reliability Reliability Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-event-hubs.md
This article describes reliability support in [Azure Event Hubs](../event-hubs/e
[!INCLUDE [Availability zone description](includes/reliability-availability-zone-description-include.md)]
-Event Hubs implements transparent failure detection and failover mechanisms so that, when failure occurs, the service continues to operate within the assured service-levels and without noticeable interruptions. If you create an Event Hubs namespace in a region that supports availability zones, [zone redundancy](./availability-zones-overview.md#zonal-and-zone-redundant-services) is automatically enabled. With zone-redundancy, fault tolerance is increased and the service has enough capacity reserves to cope with the outage of an entire facility. Both metadata and data (events) are replicated across data centers in each zone.
+Event Hubs implements transparent failure detection and failover mechanisms so that, when failure occurs, the service continues to operate within the assured service-levels and without noticeable interruptions. If you create an Event Hubs namespace in a region that supports availability zones, [zone redundancy](./availability-zones-overview.md#zonal-and-zone-redundant-services) is automatically enabled. With zone-redundancy, fault tolerance is increased and the service has enough capacity reserves to cope with the outage of an entire facility. Both metadata and data (events) are replicated across data centers in each zone.
### Prerequisites
The Azure portal doesn't support disabling availability zones. To disable availa
### Availability zone migration
-When you create availability zones in a region that supports them, availability zones are automatically enabled. If you wish to learn how to move your Event Hub to a new region that supports availability zones, see
+When you create an Event Hubs namespace in a region that supports availability zones, zone redundancy is automatically enabled. To learn how to move your Event Hubs namespace to a region that supports availability zones, see
[Relocate Event Hubs to another region](../operational-excellence/relocation-event-hub.md).
-### Pricing
-Need Info. Any pricing considerations when using availability zones?
-- ## Cross-region disaster recovery and business continuity [!INCLUDE [introduction to disaster recovery](includes/reliability-disaster-recovery-description-include.md)]
There are two features that provide geo-disaster recovery in Azure Event Hubs.
Geo-Disaster recovery ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups, and settings) is continuously replicated from a primary namespace to a secondary namespace when paired.
- The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
+ The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
With Geo-Disaster recovery, you can initiate a once-only failover move from the primary to the secondary at any time. The failover move points the chosen alias name for the namespace to the secondary namespace. After the move, the pairing is then removed. The failover is nearly instantaneous once initiated. A CLI sketch of initiating a failover appears after this list.
- For detailed information, as well as samples and further documentation, on Geo-Disaster recovery in Event Hubs, see [Azure Event Hubs - Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md).
+ For detailed information, samples, and further documentation on Geo-Disaster recovery in Event Hubs, see [Azure Event Hubs - Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md).
- **Geo-replication (public preview)**, which provides replication of both metadata and data, replicates configuration information and all of the data from a primary namespace to one, or more secondary namespaces. When a failover is performed, the selected secondary becomes the primary and the previous primary becomes a secondary. Users can perform a failover back to the original primary when desired.
- For detailed information, as well as samples and further documentation, on Geo-replication in Event Hubs, see [Geo-replication ](../event-hubs/geo-replication.md).
+ For detailed information, samples, and further documentation on Geo-replication in Event Hubs, see [Geo-replication](../event-hubs/geo-replication.md).
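As referenced above, a hedged Azure CLI sketch of initiating a Geo-Disaster recovery failover (run against the secondary namespace; all names are placeholders) could look like this:

```bash
# Point the Geo-DR alias at the secondary namespace. After the failover
# completes, the pairing between the namespaces is removed.
az eventhubs georecovery-alias fail-over \
  --resource-group "<resource-group>" \
  --namespace-name "<secondary-namespace>" \
  --alias "<alias-name>"
```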
reliability Reliability Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machine-scale-sets.md
The following example creates a Linux single-zone scale set named *myScaleSet* i
} ```
-For a complete example of a single-zone scale set and network resources, see [our sample Resource Manager template](https://github.com/Azure/vm-scale-sets/blob/master/z_deprecated/preview/zones/singlezone.json).
- ### Zone-redundant scale set To create a zone-redundant scale set, specify multiple values in the `zones` property for the *Microsoft.Compute/virtualMachineScaleSets* resource type. The following example creates a zone-redundant scale set named *myScaleSet* across *East US 2* zones *1,2,3*:
To create a zone-redundant scale set, specify multiple values in the `zones` pro
``` If you create a public IP address or a load balancer, specify the *"sku": { "name": "Standard" }"* property to create zone-redundant network resources. You also need to create a Network Security Group and rules to permit any traffic. For more information, see [Azure Load Balancer Standard overview](../load-balancer/load-balancer-overview.md) and [Standard Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md).
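Assuming the same names as the template example and access to all three zones in East US 2, a roughly equivalent Azure CLI sketch for a zone-redundant scale set is:

```bash
# Create a zone-redundant scale set spanning zones 1, 2, and 3 in East US 2.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --location eastus2 \
  --zones 1 2 3 \
  --image Ubuntu2204 \
  --generate-ssh-keys
```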
-For a complete example of a zone-redundant scale set and network resources, see [our sample Resource Manager template](https://github.com/Azure/vm-scale-sets/blob/master/z_deprecated/preview/zones/multizone.json).
- -
sentinel Awake Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/awake-security.md
- Title: "Awake Security connector for Microsoft Sentinel"
-description: "Learn how to install the connector Awake Security to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Awake Security connector for Microsoft Sentinel
-
-The Awake Security CEF connector allows users to send detection model matches from the Awake Security Platform to Microsoft Sentinel. Remediate threats quickly with the power of network detection and response and speed up investigations with deep visibility especially into unmanaged entities including users, devices and applications on your network. The connector also enables the creation of network security-focused custom alerts, incidents, workbooks and notebooks that align with your existing security operations workflows.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (AwakeSecurity)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Arista - Awake Security](https://awakesecurity.com/) |
-
-## Query samples
-
-**Top 5 Adversarial Model Matches by Severity**
-
- ```kusto
-union CommonSecurityLog
-
- | where DeviceVendor == "Arista Networks" and DeviceProduct == "Awake Security"
-
- | summarize TotalActivities=sum(EventCount) by Activity,LogSeverity
-
- | top 5 by LogSeverity desc
- ```
-
-**Top 5 Devices by Device Risk Score**
-
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "Arista Networks" and DeviceProduct == "Awake Security"
- | extend DeviceCustomNumber1 = coalesce(column_ifexists("FieldDeviceCustomNumber1", long(null)), DeviceCustomNumber1, long(null))
- | summarize MaxDeviceRiskScore=max(DeviceCustomNumber1),TimesAlerted=count() by SourceHostName=coalesce(SourceHostName,"Unknown")
- | top 5 by MaxDeviceRiskScore desc
- ```
---
-## Vendor installation instructions
-
-1. Linux Syslog agent configuration
-
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
-
-1.1 Select or create a Linux machine
-
-Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
-
-1.2 Install the CEF collector on the Linux machine
-
-Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
-
-> 1. Make sure that you have Python on your machine using the following command: python --version.
-
-> 2. You must have elevated permissions (sudo) on your machine.
-
- Run the following command to install and apply the CEF collector:
-
- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
-
-2. Forward Awake Adversarial Model match results to a CEF collector.
-
-Perform the following steps to forward Awake Adversarial Model match results to a CEF collector listening on TCP port **514** at IP **192.168.0.1**:
-- Navigate to the Detection Management Skills page in the Awake UI.-- Click + Add New Skill.-- Set the Expression field to,
->integrations.cef.tcp { destination: "192.168.0.1", port: 514, secure: false, severity: Warning }
-- Set the Title field to a descriptive name like,
->Forward Awake Adversarial Model match result to Microsoft Sentinel.
-- Set the Reference Identifier to something easily discoverable like,
->integrations.cef.sentinel-forwarder
-- Click Save.-
-Note: Within a few minutes of saving the definition and other fields the system will begin sending new model match results to the CEF events collector as they are detected.
-
-For more information, refer to the **Adding a Security Information and Event Management Push Integration** page from the Help Documentation in the Awake UI.
-
-3. Validate connection
-
-Follow the instructions to validate your connectivity:
-
-Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
-
->It may take about 20 minutes until the connection streams data to your workspace.
-
-If the logs are not received, run the following connectivity validation script:
-
-> 1. Make sure that you have Python on your machine using the following command: python --version
-
->2. You must have elevated permissions (sudo) on your machine
-
- Run the following command to validate your connectivity:
-
- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
-
-4. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/arista-networks.awake-security?tab=Overview) in the Azure Marketplace.
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md
- Title: Troubleshoot a connection between Microsoft Sentinel and a CEF or Syslog data connector| Microsoft Docs
-description: Learn how to troubleshoot issues with your Microsoft Sentinel CEF or Syslog data connector.
---- Previously updated : 09/17/2024--
-# [Deprecated] Troubleshoot your CEF or Syslog data connector
--
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
-
-This article describes common methods for verifying and troubleshooting a CEF or Syslog data connector for Microsoft Sentinel.
-
-For example, if your log messages aren't appearing in the *Syslog* or *CommonSecurityLog* tables, your data source might not be connecting properly. There might also be another reason your data isn't being received.
-
-Other symptoms of a failed connector deployment include when either the **security_events.conf** or the **security-omsagent.config.conf** files are missing, or if the rsyslog server isn't listening on port 514.
-
-For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md) and [Collect data from Linux-based sources using Syslog](connect-syslog.md).
-
-If you deployed your connector using a different method than the documented procedure, and if you're having issues, we recommend that you scrap the deployment and start over, this time following the documented instructions.
-
-This article shows you how to troubleshoot CEF or Syslog connectors with the Log Analytics agent. For troubleshooting information related to ingesting CEF logs via the Azure Monitor Agent (AMA), review the [Common Event Format (CEF) via AMA](connect-cef-ama.md) connector instructions.
-
-> [!IMPORTANT]
->
-> On **February 28th 2023**, we introduced changes to the CommonSecurityLog table schema. Following this change, you might need to review and update custom queries. For more details, see the [recommended actions section](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232) in this blog post. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) has been updated by Microsoft Sentinel.
-
-## How to use this article
-
-When information in this article is relevant only for Syslog or only for CEF connectors, it's presented in separate tabs. Make sure that you're using the instructions on the correct tab for your connector type.
-
-For example, if you're troubleshooting a CEF connector, start with [Validate CEF connectivity](#validate-cef-connectivity). If you're troubleshooting a Syslog connector, start with [Verify your data connector prerequisites](#verify-your-data-connector-prerequisites).
-
-# [CEF](#tab/cef)
-
-### Validate CEF connectivity
-
-After you [deploy your log forwarder](connect-common-event-format.md) and [configure your security solution to send it CEF messages](./connect-common-event-format.md), use the steps in this section to verify connectivity between your security solution and Microsoft Sentinel.
-
-This procedure is relevant only for CEF connections, and is *not* relevant for Syslog connections.
-
-1. Make sure that you have the following prerequisites:
-
- - You must have elevated permissions (sudo) on your log forwarder machine.
-
- - You must have **python 2.7** or **3** installed on your log forwarder machine. Use the `python --version` command to check.
-
- - You might need the Workspace ID and Workspace Primary Key at some point in this process. You can find them in the workspace resource, under **Agents management**.
-
-1. From the Microsoft Sentinel navigation menu, open **Logs**. Run a query using the **CommonSecurityLog** schema to see if you're receiving logs from your security solution.
-
- It might take about 20 minutes until your logs start to appear in **Log Analytics**.
-
-1. If you don't see any results from the query, verify that your security solution is generating log messages. Or, try taking some actions to generate log messages, and verify that the messages are forwarded to your designated Syslog forwarder machine.
-
-1. To check connectivity between your security solution, the log forwarder, and Microsoft Sentinel, run the following script on the log forwarder (applying the Workspace ID in place of the placeholder). This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity. <br>
-
- ```bash
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID]
- ```
-
- - You might get a message directing you to run a command to correct an issue with the **mapping of the *Computer* field**. See the [explanation in the validation script](#mapping-command) for details.
-
- - You might get a message directing you to run a command to correct an issue with the **parsing of Cisco ASA firewall logs**. See the [explanation in the validation script](#parsing-command) for details.
-
-### CEF validation script explained
-
-The following section describes the CEF validation script, for the [rsyslog daemon](#rsyslog-daemon) and the [syslog-ng daemon](#syslog-ng-daemon).
-
-#### rsyslog daemon
-
-For an rsyslog daemon, the CEF validation script runs the following checks:
-
-1. Checks that the file<br>
- `/etc/opt/microsoft/omsagent/[WorkspaceID]/conf/omsagent.d/security_events.conf`<br>
- exists and is valid.
-
-1. Checks that the file includes the following text:
-
- ```bash
- <source>
- type syslog
- port 25226
- bind 127.0.0.1
- protocol_type tcp
- tag oms.security
- format /(?<time>(?:\w+ +){2,3}(?:\d+:){2}\d+|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[\w\-\:\+]{3,12}):?\s*(?:(?<host>[^: ]+) ?:?)?\s*(?<ident>.*CEF.+?(?=0\|)|%ASA[0-9\-]{8,10})\s*:?(?<message>0\|.*|.*)/
- <parse>
- message_format auto
- </parse>
- </source>
-
- <filter oms.security.**>
- type filter_syslog_security
- </filter>
- ```
-
-1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
-
- ```bash
- grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb
- ```
-
- - <a name="parsing-command"></a>If there's an issue with the parsing, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct parsing and restarts the agent.
-
- ```bash
- # Cisco ASA parsing fix
- sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
-
- ```bash
- grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
- ```
-
- - <a name="mapping-command"></a>If there's an issue with the mapping, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct mapping and restarts the agent.
-
- ```bash
- # Computer field mapping fix
- sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks if there are any security enhancements on the machine that might be blocking network traffic (such as a host firewall).
-
-1. Checks that the syslog daemon (rsyslog) is properly configured to send messages (that it identifies as CEF) to the Log Analytics agent on TCP port 25226:
-
- Configuration file: `/etc/rsyslog.d/security-config-omsagent.conf`
-
- ```bash
- if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
- ```
-
-1. Restarts the syslog daemon and the Log Analytics agent:
-
- ```bash
- service rsyslog restart
-
- /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the necessary connections are established: tcp 514 for receiving data, tcp 25226 for internal communication between the syslog daemon and the Log Analytics agent:
-
- ```bash
- netstat -an | grep 514
-
- netstat -an | grep 25226
- ```
-
-1. Checks that the syslog daemon is receiving data on port 514, and that the agent is receiving data on port 25226:
-
- ```bash
- sudo tcpdump -A -ni any port 514 -vv
-
- sudo tcpdump -A -ni any port 25226 -vv
- ```
-
-1. Sends MOCK data to port 514 on localhost. This data should be observable in the Microsoft Sentinel workspace by running the following query:
-
- ```kusto
- CommonSecurityLog
- | where DeviceProduct == "MOCK"
- ```
-
-#### syslog-ng daemon
-
-For a syslog-ng daemon, the CEF validation script runs the following checks:
-
-1. Checks that the file<br>
- `/etc/opt/microsoft/omsagent/[WorkspaceID]/conf/omsagent.d/security_events.conf`<br>
- exists and is valid.
-
-1. Checks that the file includes the following text:
-
- ```bash
- <source>
- type syslog
- port 25226
- bind 127.0.0.1
- protocol_type tcp
- tag oms.security
- format /(?<time>(?:\w+ +){2,3}(?:\d+:){2}\d+|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[\w\-\:\+]{3,12}):?\s*(?:(?<host>[^: ]+) ?:?)?\s*(?<ident>.*CEF.+?(?=0\|)|%ASA[0-9\-]{8,10})\s*:?(?<message>0\|.*|.*)/
- <parse>
- message_format auto
- </parse>
- </source>
-
- <filter oms.security.**>
- type filter_syslog_security
- </filter>
- ```
-
-1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
-
- ```bash
- grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb
- ```
-
- - <a name="parsing-command"></a>If there's an issue with the parsing, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct parsing and restarts the agent.
-
- ```bash
- # Cisco ASA parsing fix
- sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
-
- ```bash
- grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
- ```
-
- - <a name="mapping-command"></a>If there's an issue with the mapping, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct mapping and restarts the agent.
-
- ```bash
- # Computer field mapping fix
- sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks if there are any security enhancements on the machine that might be blocking network traffic (such as a host firewall).
-
-1. Checks that the syslog daemon (syslog-ng) is properly configured to send messages that it identifies as CEF (using a regex) to the Log Analytics agent on TCP port 25226:
-
- - Configuration file: `/etc/syslog-ng/conf.d/security-config-omsagent.conf`
-
- ```bash
- filter f_oms_filter {match(\"CEF\|ASA\" ) ;};destination oms_destination {tcp(\"127.0.0.1\" port(25226));};
- log {source(s_src);filter(f_oms_filter);destination(oms_destination);};
- ```
-
-1. Restarts the syslog daemon and the Log Analytics agent:
-
- ```bash
- service syslog-ng restart
-
- /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the necessary connections are established: tcp 514 for receiving data, tcp 25226 for internal communication between the syslog daemon and the Log Analytics agent:
-
- ```bash
- netstat -an | grep 514
-
- netstat -an | grep 25226
- ```
-
-1. Checks that the syslog daemon is receiving data on port 514, and that the agent is receiving data on port 25226:
-
- ```bash
- sudo tcpdump -A -ni any port 514 -vv
-
- sudo tcpdump -A -ni any port 25226 -vv
- ```
-
-1. Sends MOCK data to port 514 on localhost. This data should be observable in the Microsoft Sentinel workspace by running the following query:
-
- ```kusto
- CommonSecurityLog
- | where DeviceProduct == "MOCK"
- ```
-
-# [Syslog](#tab/syslog)
-
-### Troubleshooting Syslog data connectors
-
-If you're troubleshooting a Syslog data connector, start with verifying your prerequisites in the [next section](#verify-your-data-connector-prerequisites), using the information in the **Syslog** tab.
---
-## Verify your data connector prerequisites
-
-Use the following sections to check your CEF or Syslog data connector prerequisites.
-
-# [CEF](#tab/cef)
-
-### Azure Virtual Machine as a CEF collector
-
-If you're using an Azure Virtual Machine as a CEF collector, verify the following:
--- Before you deploy the [Common Event Format Data connector Python script](./connect-log-forwarder.md), make sure that your Virtual Machine isn't already connected to an existing Log Analytics workspace. You can find this information on the Log Analytics Workspace Virtual Machine list, where a VM that's connected to a Syslog workspace is listed as **Connected**.--- Make sure that Microsoft Sentinel is connected to the correct Log Analytics workspace, with the **SecurityInsights** solution installed.-
- For more information, see [Step 1: Deploy the log forwarder](./connect-log-forwarder.md).
--- Make sure that your machine is sized correctly with at least the minimum required prerequisites. For more information, see [CEF prerequisites](connect-common-event-format.md#prerequisites).-
-### On-premises or a non-Azure Virtual Machine
-
-If you're using an on-premises machine or a non-Azure virtual machine for your data connector, make sure that you've run the installation script on a fresh installation of a supported Linux operating system:
-
-> [!TIP]
-> You can also find this script from the **Common Event Format** data connector page in Microsoft Sentinel.
->
-
-```cli
-sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py <WorkspaceId> <Primary Key>
-```
-
-### Enable your CEF facility and log severity collection
-
-The Syslog server, either rsyslog or syslog-ng, forwards any data defined in the relevant configuration file, which is automatically populated by the settings defined in your Log Analytics workspace.
-
-Make sure to add details about the facilities and severity log levels that you want to be ingested into Microsoft Sentinel. The configuration process may take about 20 minutes.
-
-For more information, see [Deployment script explained](./connect-log-forwarder.md#deployment-script-explained).
-
-For example, for an rsyslog server, run the following command to display the current settings for your Syslog forwarding, and review any changes to the configuration file:
-
-```bash
-cat /etc/rsyslog.d/security-config-omsagent.conf
-```
-
-In this case, for rsyslog, output similar to the following should display:
-
-```bash
-if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
-```
--
-# [Syslog](#tab/syslog)
-
-### Azure Virtual Machine as a Syslog collector
-
-If you're using an Azure Virtual Machine as a Syslog collector, verify the following:
--- While you're setting up your Syslog data connector, make sure to turn off your [Microsoft Defender for Cloud auto-provisioning settings](../security-center/security-center-enable-data-collection.md) for the [MM#connector-options).-
- You can turn them back on after your data connector is completely set up.
--- Make sure that Microsoft Sentinel is connected to the correct Log Analytics workspace, with the **SecurityInsights** solution installed.-
- For more information, see [Step 1: Deploy the log forwarder](./connect-log-forwarder.md).
--- Make sure that your machine is sized correctly with at least the minimum required prerequisites. For more information, see [CEF prerequisites](connect-common-event-format.md#prerequisites).-
-### Enable your Syslog facility and log severity collection
-
-The Syslog server, either rsyslog or syslog-ng, forwards any data defined in the relevant configuration file, which is automatically populated by the settings defined in your Log Analytics workspace.
-
-Make sure to add details about the facilities and severity log levels that you want to be ingested into Microsoft Sentinel. The configuration process may take about 20 minutes.
-
-For more information, see [Deployment script explained](./connect-log-forwarder.md#deployment-script-explained). and [Configure Syslog in the Azure portal](/azure/azure-monitor/agents/data-sources-syslog).
-
-**For example, for an rsyslog server**, run the following command to display the current settings for your Syslog forwarding, and review any changes to the configuration file:
-
-```bash
-cat /etc/rsyslog.d/95-omsagent.conf
-```
-
-In this case, for rsyslog, output similar to the following should display. The contents of this file should reflect what's defined in the Syslog Configuration on the **Log Analytics Workspace Client configuration - Syslog facility settings** screen.
--
-```bash
-OMS Syslog collection for workspace c69fa733-da2e-4cf9-8d92-eee3bd23fe81
-auth.=alert;auth.=crit;auth.=debug;auth.=emerg;auth.=err;auth.=info;auth.=notice;auth.=warning @127.0.0.1:25224
-authpriv.=alert;authpriv.=crit;authpriv.=debug;authpriv.=emerg;authpriv.=err;authpriv.=info;authpriv.=notice;authpriv.=warning @127.0.0.1:25224
-cron.=alert;cron.=crit;cron.=debug;cron.=emerg;cron.=err;cron.=info;cron.=notice;cron.=warning @127.0.0.1:25224
-local0.=alert;local0.=crit;local0.=debug;local0.=emerg;local0.=err;local0.=info;local0.=notice;local0.=warning @127.0.0.1:25224
-local4.=alert;local4.=crit;local4.=debug;local4.=emerg;local4.=err;local4.=info;local4.=notice;local4.=warning @127.0.0.1:25224
-syslog.=alert;syslog.=crit;syslog.=debug;syslog.=emerg;syslog.=err;syslog.=info;syslog.=notice;syslog.=warning @127.0.0.1:25224
-```
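After changing the facility or severity settings, the collector has to pick up the regenerated configuration. A hedged sketch of confirming that, assuming a systemd-based distribution and the default local forwarding port (25224) shown above:

```bash
# Restart rsyslog so the regenerated 95-omsagent.conf is reloaded,
# then check that the OMS agent still has a socket on the local Syslog forwarding port.
sudo systemctl restart rsyslog
sudo netstat -anp | grep 25224
```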
----
-## Troubleshoot operating system issues
-
-This section describes how to troubleshoot issues that derive from the operating system configuration.
-
-# [CEF](#tab/cef)
-
-**To troubleshoot operating system issues**:
-
-1. If you haven't yet, verify that you're working with a supported operating system and Python version. For more information, see [CEF prerequisites](connect-common-event-format.md#prerequisites).
-
-1. If your Virtual Machine is in Azure, verify that the network security group (NSG) allows inbound TCP/UDP connectivity from your log client (Sender) on port 514 (see the NSG rule sketch after this procedure).
-
-1. Verify that packets are arriving to the Syslog Collector. To capture the syslog packets arriving to the Syslog Collector, run:
-
- ```config
- tcpdump -Ani any port 514 and host <ip_address_of_sender> -vv
- ```
-
-1. Do one of the following:
-
- - If you don't see any packets arriving, confirm the NSG security group permissions and the routing path to the Syslog Collector.
-
- - If you do see packets arriving, confirm that they aren't being rejected.
-
- If you see rejected packets, confirm that the IP tables aren't blocking the connections.
-
- To confirm that packets aren't being rejected, run:
-
- ```config
- watch -n 2 -d iptables -nvL
- ```
-
-1. Verify whether the CEF server is processing the logs. Run one of the following, depending on which file your distribution writes syslog messages to:
-
- ```config
- tail -f /var/log/messages
- tail -f /var/log/syslog
- ```
-
- Any CEF logs being processed are displayed in plain text.
-
-1. Confirm that the rsyslog server is listening on TCP/UDP port 514. Run:
-
- ```config
- netstat -anp | grep syslog
- ```
-
- If you have any CEF or ASA logs being sent to your Syslog Collector, you should see an established connection on TCP port 25226.
-
- For example:
-
- ```config
- 0 127.0.0.1:36120 127.0.0.1:25226 ESTABLISHED 1055/rsyslogd
- ```
-
- If the connection is blocked, you may have a [blocked SELinux connection to the OMS agent](#selinux-blocking-connection-to-the-oms-agent), or a [blocked firewall process](#blocked-firewall-policy). Use the relevant instructions further on to determine the issue.
--
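If step 2 of this procedure surfaces a missing NSG rule on an Azure-hosted collector, a hedged Azure CLI sketch for allowing inbound port 514 follows. The resource group, NSG name, priority, and sender address are placeholders to replace for your environment:

```bash
# Placeholders: <resource-group>, <nsg-name>, and <sender-ip> are hypothetical values for your environment.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowSyslog514 \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol "*" \
  --source-address-prefixes <sender-ip> \
  --destination-port-ranges 514
```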
-# [Syslog](#tab/syslog)
-
-**To troubleshoot operating system issues**:
-
-1. If you haven't yet, verify that you're working with a supported operating system and Python version. For more information, see [Configure your Linux machine or appliance](connect-syslog.md#configure-your-linux-machine-or-appliance).
-
-1. If your Virtual Machine is in Azure, verify that the network security group (NSG) allows inbound TCP/UDP connectivity from your log client (Sender) on port 514.
-
-1. Verify that packets are arriving to the Syslog Collector. To capture the syslog packets arriving to the Syslog Collector, run:
-
- ```config
- tcpdump -Ani any port 514 and host <ip_address_of_sender> -vv
- ```
-
-1. Do one of the following:
-
- - If you don't see any packets arriving, confirm the NSG security group permissions and the routing path to the Syslog Collector.
-
- - If you do see packets arriving, confirm that they aren't being rejected.
-
- If you see rejected packets, confirm that the IP tables aren't blocking the connections.
-
- To confirm that packets aren't being rejected, run:
-
- ```config
- watch -n 2 -d iptables -nvL
- ```
-
-1. Verify whether the Syslog server is processing the logs. Run one of the following, depending on which file your distribution writes syslog messages to:
-
- ```config
- tail -f /var/log/messages
- tail -f /var/log/syslog
- ```
-
- Any Syslog logs being processed are displayed in plain text.
-
-1. Confirm that the rsyslog server is listening on TCP/UDP port 514. Run:
-
- ```config
- netstat -anp | grep syslog
- ```
--
-### SELinux blocking connection to the OMS agent
-
-This procedure describes how to confirm whether SELinux is currently in a `permissive` state, or is blocking a connection to the OMS agent. This procedure is relevant when your operating system is a distribution from RedHat or CentOS, and for both CEF and Syslog data connectors.
-
-> [!NOTE]
-> Microsoft Sentinel support for CEF and Syslog includes only FIPS hardening. Other hardening methods, such as SELinux or CIS, are not currently supported.
->
-
-1. Run:
-
- ```config
- sestatus
- ```
-
- The status is displayed as one of the following:
-
- - `disabled`. This configuration is supported for your connection to Microsoft Sentinel.
- - `permissive`. This configuration is supported for your connection to Microsoft Sentinel.
- - `enforced`. This configuration is not supported, and you must either disable the status or set it to `permissive`.
-
-1. If the status is currently set to `enforced`, turn it off temporarily to confirm whether this was the blocker. Run:
-
- ```config
- setenforce 0
- ```
-
- > [!NOTE]
- > This step turns off SELinux only until the server reboots. Modify the SELinux configuration to keep it turned off.
- >
-
-1. To verify whether the change was successful, run:
-
- ```
- getenforce
- ```
-
- The `permissive` state should be returned.
-
-> [!IMPORTANT]
-> This setting update is lost when the system is rebooted. To permanently update this setting to `permissive`, modify the **/etc/selinux/config** file, changing the `SELINUX` value to `SELINUX=permissive`.
->
-> For more information, see [RedHat documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_selinux/changing-selinux-states-and-modes_using-selinux).
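A hedged sketch of making that change from the shell; it simply rewrites the `SELINUX` line in **/etc/selinux/config**, so review the file afterward:

```bash
# Switch SELinux to permissive mode persistently, then confirm the setting.
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
```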
--
-### Blocked firewall policy
-
-This procedure describes how to verify whether a firewall policy is blocking the connection from the Rsyslog daemon to the OMS agent, and how to disable it as needed. This procedure is relevant for both CEF and Syslog data connectors.
--
-1. Run the following command to verify whether there are any rejects in the IP tables, indicating traffic that's being dropped by the firewall policy:
-
- ```config
- watch -n 2 -d iptables -nvL
- ```
-
-1. To keep the firewall policy enabled, create a policy rule to allow the connections. Add rules as needed to allow the TCP/UDP ports 25226 and 25224 through the active firewall.
-
- For example:
-
- ```config
- Every 2.0s: iptables -nvL rsyslog: Wed Jul 7 15:56:13 2021
-
- Chain INPUT (policy ACCEPT 6185K packets, 2466M bytes)
- pkts bytes target prot opt in out source destination
--
- Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
- pkts bytes target prot opt in out source destination
--
- Chain OUTPUT (policy ACCEPT 6792K packets, 6348M bytes)
- pkts bytes target prot opt in out source destination
- ```
-
-1. To create a rule to allow TCP/UDP ports 25226 and 25224 through the active firewall, add rules as needed.
-
- 1. To install the Firewall Policy editor, run:
-
- ```config
- yum install policycoreutils-python
- ```
-
- 1. Add the firewall rules to the firewall policy. For example:
-
- ```config
- sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 25226 -j ACCEPT
- sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p udp --dport 25224 -j ACCEPT
- sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 25224 -j ACCEPT
- ```
-
- 1. Verify that the exception was added. Run:
-
- ```config
- sudo firewall-cmd --direct --get-rules ipv4 filter INPUT
- ```
-
- 1. Reload the firewall. Run:
-
- ```config
- sudo firewall-cmd --reload
- ```
-
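On most firewalld versions, `--direct` rules added without `--permanent` apply only to the running configuration. If you want the allow rules above to survive a reload or reboot, a sketch of repeating them as permanent rules (same ports as above):

```bash
# Persist the allow rules for the agent ports, reload firewalld, and list the stored direct rules.
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 25226 -j ACCEPT
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p udp --dport 25224 -j ACCEPT
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 25224 -j ACCEPT
sudo firewall-cmd --reload
sudo firewall-cmd --permanent --direct --get-rules ipv4 filter INPUT
```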
-> [!NOTE]
-> To disable the firewall, run: `sudo systemctl disable firewalld`
->
-
-## Linux and OMS Agent-related issues
-
-# [CEF](#tab/cef)
-
-If the steps described earlier in this article don't solve your issue, you may have a connectivity problem between the OMS Agent and the Microsoft Sentinel workspace.
-
-In such cases, continue troubleshooting by verifying the following:
-
-- Make sure that you can see packets arriving on TCP/UDP port 514 on the Syslog collector
-
-- Make sure that you can see logs being written to the local log file, either **/var/log/messages** or **/var/log/syslog**
-
-- Make sure that you can see data packets flowing on port 25226
-
-- Make sure that your virtual machine has an outbound connection to port 443 via TCP, or can connect to the [Log Analytics endpoints](/azure/azure-monitor/agents/log-analytics-agent#network-requirements)
-
-- Make sure that you have access to required URLs from your CEF collector through your firewall policy. For more information, see [Log Analytics agent firewall requirements](/azure/azure-monitor/agents/log-analytics-agent#firewall-requirements).
-
-Run the following query in your Log Analytics workspace to determine whether the agent is communicating successfully with Azure, or whether the OMS agent is blocked from connecting to the workspace.
-
-```kusto
-Heartbeat
- | where Computer contains "<computername>"
- | sort by TimeGenerated desc
-```
-
-A log entry is returned if the agent is communicating successfully. Otherwise, the OMS agent may be blocked.
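If you prefer to run the same check from a shell rather than the portal, the Heartbeat query can also be issued with the Azure CLI. This is a minimal sketch, assuming the Log Analytics query capability is available in your CLI version; the workspace GUID and computer name are placeholders:

```bash
# Placeholders: <workspace-guid> is the Log Analytics workspace ID, <computername> is the collector's hostname.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Heartbeat | where Computer contains '<computername>' | sort by TimeGenerated desc | take 1"
```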
--
-# [Syslog](#tab/syslog)
-
-If the steps described earlier in this article don't solve your issue, you may have a connectivity problem between the OMS Agent and the Microsoft Sentinel workspace.
-
-In such cases, continue troubleshooting by verifying the following:
-
-- Make sure that you can see packets arriving on TCP/UDP port 514 on the Syslog collector
-- Make sure that you can see logs being written to the local log file, either **/var/log/messages** or **/var/log/syslog**
-- Make sure that you can see data packets flowing on port 25224
-- Make sure that your virtual machine has an outbound connection to port 443 via TCP, or can connect to the [Log Analytics endpoints](/azure/azure-monitor/agents/log-analytics-agent#network-requirements)
-- Make sure that you have access to required URLs from your Syslog or CEF collector through your firewall policy. For more information, see [Log Analytics agent firewall requirements](/azure/azure-monitor/agents/log-analytics-agent#firewall-requirements).
-- Make sure that your Azure Virtual Machine is shown as connected in your workspace's list of virtual machines.
-
-Run the following query in your Log Analytics workspace to determine whether the agent is communicating successfully with Azure, or whether the OMS agent is blocked from connecting to the workspace.
-
-```kusto
-Heartbeat
- | where Computer contains "<computername>"
- | sort by TimeGenerated desc
-```
-
-A log entry is returned if the agent is communicating successfully. Otherwise, the OMS agent may be blocked.
---
-## Next steps
-
-If the troubleshooting steps in this article haven't helped your issue, open a support ticket or use the Microsoft Sentinel community resources. For more information, see [Useful resources for working with Microsoft Sentinel](resources.md).
-
-To learn more about Microsoft Sentinel, see the following articles:
-- Learn about [CEF and CommonSecurityLog field mapping](cef-name-mapping.md).
-- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
-- [Use workbooks](monitor-your-data.md) to monitor your data.
service-bus-messaging Compare Messaging Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/compare-messaging-services.md
Azure Event Grid is a highly scalable, fully managed Pub Sub message distributio
The service provides an eventing backbone that enables event-driven and reactive programming. It uses the publish-subscribe model. Publishers emit events, but have no expectation about how the events are handled. Subscribers decide on which events they want to handle.
-Event Grid is deeply integrated with other Azure services and can be integrated with third-party services. It simplifies event consumption and lowers costs by eliminating the need for constant polling. Event Grid efficiently and reliably routes events from Azure and non-Azure resources. It distributes the events to registered subscriber endpoints. The event message has the information you need to react to changes in services and applications. Event Grid isn't a data pipeline, and doesn't deliver the actual object that was updated.
+Event Grid is deeply integrated with other Azure services and can be integrated with third-party services. It simplifies event consumption and lowers costs by eliminating the need for constant polling. Event Grid efficiently and reliably routes events from Azure and non-Azure resources. It distributes the events to registered subscriber endpoints. The event message has the information you need to react to changes in services and applications.
It has the following characteristics:
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
Title: Add Azure Automation runbooks to Site Recovery recovery plans description: Learn how to extend recovery plans with Azure Automation for disaster recovery using Azure Site Recovery. - -+ Last updated 03/07/2024
site-recovery Site Recovery Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sql.md
Title: Set up disaster recovery for SQL Server with Azure Site Recovery description: This article describes how to set up disaster recovery for SQL Server by using SQL Server and Azure Site Recovery.- - -+ Last updated 03/28/2023
SQL Server on an Azure infrastructure as a service (IaaS) virtual machine (VM) o
SQL Server on an Azure IaaS VM or at on-premises.| [Failover clustering (Always On FCI)](/sql/sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server) | The time taken to fail over between the nodes. | Because Always On FCI uses shared storage, the same view of the storage instance is available on failover.
SQL Server on an Azure IaaS VM or at on-premises.| [Database mirroring (high-performance mode)](/sql/database-engine/database-mirroring/database-mirroring-sql-server) | The time taken to force the service, which uses the mirror server as a warm standby server. | Replication is asynchronous. The mirror database might lag somewhat behind the principal database. The lag is typically small. But it can become large if the principal or mirror server's system is under a heavy load.<br/><br/>Log shipping can be a supplement to database mirroring. It's a favorable alternative to asynchronous database mirroring.
SQL as platform as a service (PaaS) on Azure.<br/><br/>This deployment type includes single databases and elastic pools. | Active geo-replication | 30 seconds after failover is triggered.<br/><br/>When failover is activated for one of the secondary databases, all other secondaries are automatically linked to the new primary. | RPO of five seconds.<br/><br/>Active geo-replication uses the Always On technology of SQL Server. It asynchronously replicates committed transactions on the primary database to a secondary database by using snapshot isolation.<br/><br/>The secondary data is guaranteed to never have partial transactions.
-SQL as PaaS configured with active geo-replication on Azure.<br/><br/>This deployment type includes a managed instances, elastic pools, and single databases. | Auto-failover groups | RTO of one hour. | RPO of five seconds.<br/><br/>Auto-failover groups provide the group semantics on top of active geo-replication. But the same asynchronous replication mechanism is used.
+SQL as PaaS configured with active geo-replication on Azure.<br/><br/>This deployment type includes managed instances, elastic pools, and single databases. | Auto-failover groups | RTO of one hour. | RPO of five seconds.<br/><br/>Auto-failover groups provide the group semantics on top of active geo-replication. But the same asynchronous replication mechanism is used.
SQL Server on an Azure IaaS VM or at on-premises.| Replication with Azure Site Recovery | RTO is typically less than 15 minutes. To learn more, read the [RTO SLA provided by Site Recovery](https://azure.microsoft.com/support/legal/sla/site-recovery/v1_2/). | One hour for application consistency and five minutes for crash consistency. If you are looking for lower RPO, use other BCDR technologies.
> [!NOTE]
Site Recovery replication for SQL Server is covered under the Software Assurance
Site Recovery is application agnostic. Site Recovery can help protect any version of SQL Server that is deployed on a supported operating system. For more, see the [support matrix for recovery](vmware-physical-azure-support-matrix.md#replicated-machines) of replicated machines.
-### Does ASR Work with SQL Transactional Replication?
+### Does Azure Site Recovery Work with SQL Transactional Replication?
-Due to ASR using file-level copy, SQL cannot guarantee that the servers in an associated SQL replication topology are in sync at the time of ASR failover. This may cause the logreader and/or distribution agents to fail due to LSN mismatch, which can break replication. If you failover the publisher, distributor, or subscriber in a replication topology, you need to rebuild replication. It is recommended to [reinitialize the subscription to SQL Server](/sql/relational-databases/replication/reinitialize-a-subscription).
+Because Azure Site Recovery uses file-level copy, SQL Server can't guarantee that the servers in an associated SQL replication topology are in sync at the time of Azure Site Recovery failover. This may cause the log reader and/or distribution agents to fail due to an LSN mismatch, which can break replication. If you fail over the publisher, distributor, or subscriber in a replication topology, you need to rebuild replication. We recommend that you [reinitialize the subscription to SQL Server](/sql/relational-databases/replication/reinitialize-a-subscription).
## Next steps
site-recovery Site Recovery Test Failover To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-test-failover-to-azure.md
Title: Run a test failover (disaster recovery drill) to Azure in Azure Site Recovery description: Learn about running a test failover from on-premises to Azure, using the Azure Site Recovery service. - Previously updated : 12/14/2023+ Last updated : 09/25/2024
In the following scenarios, failover requires an extra intermediate step that u
* storflt
* intelide
* atapi
-* VMware VM that don't have DHCP enabled , irrespective of whether they are using DHCP or static IP addresses.
+* VMware VMs that don't have DHCP enabled, irrespective of whether they are using DHCP or static IP addresses.
In all the other cases, no intermediate step is required, and failover takes significantly less time.
site-recovery Site Recovery Vmware Deployment Planner Analyze Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-analyze-report.md
Title: Analyze the Deployment Planner report for VMware disaster recovery with Azure Site Recovery description: This article describes how to analyze the report generated by the Recovery Deployment Planner for VMware disaster recovery to Azure, using Azure Site Recovery. - -+ Last updated 05/27/2021
Last updated 05/27/2021
# Analyze the Deployment Planner report for VMware disaster recovery to Azure
The generated Microsoft Excel report contains the following sheets:
+
## On-premises summary
+
The On-premises summary worksheet provides an overview of the profiled VMware environment.
![On-premises summary of VMware environment](media/site-recovery-vmware-deployment-planner-analyze-report/on-premises-summary-v2a.png)
-**Start Date** and **End Date**: The start and end dates of the profiling data considered for report generation. By default, the start date is the date when profiling starts, and the end date is the date when profiling stops. This can be the ΓÇÿStartDateΓÇÖ and ΓÇÿEndDateΓÇÖ values if the report is generated with these parameters.
+**Start Date** and **End Date**: The start and end dates of the profiling data considered for report generation. By default, the start date is the date when profiling starts, and the end date is the date when profiling stops. This can be the `StartDate` and `EndDate` values if the report is generated with these parameters.
**Total number of profiling days**: The total number of days of profiling between the start and end dates for which the report is generated.
-**Number of compatible virtual machines**: The total number of compatible VMs for which the required network bandwidth, required number of storage accounts, Microsoft Azure cores, configuration servers and additional process servers are calculated.
+**Number of compatible virtual machines**: The total number of compatible virtual machines for which the required network bandwidth, required number of storage accounts, Microsoft Azure cores, configuration servers and extra process servers are calculated.
-**Total number of disks across all compatible virtual machines**: The number that's used as one of the inputs to decide the number of configuration servers and additional process servers to be used in the deployment.
+**Total number of disks across all compatible virtual machines**: The number that's used as one of the inputs to decide the number of configuration servers and extra process servers to be used in the deployment.
-**Average number of disks per compatible virtual machine**: The average number of disks calculated across all compatible VMs.
+**Average number of disks per compatible virtual machine**: The average number of disks calculated across all compatible virtual machines.
-**Average disk size (GB)**: The average disk size calculated across all compatible VMs.
+**Average disk size (GB)**: The average disk size calculated across all compatible virtual machines.
-**Desired RPO (minutes)**: Either the default recovery point objective or the value passed for the ΓÇÿDesiredRPOΓÇÖ parameter at the time of report generation to estimate required bandwidth.
+**Desired RPO (minutes)**: Either the default recovery point objective or the value passed for the `DesiredRPO` parameter at the time of report generation to estimate required bandwidth.
-**Desired bandwidth (Mbps)**: The value that you have passed for the ΓÇÿBandwidthΓÇÖ parameter at the time of report generation to estimate achievable RPO.
+**Desired bandwidth (Mbps)**: The value that you have passed for the `Bandwidth` parameter at the time of report generation to estimate achievable RPO.
-**Observed typical data churn per day (GB)**: The average data churn observed across all profiling days. This number is used as one of the inputs to decide the number of configuration servers and additional process servers to be used in the deployment.
+**Observed typical data churn per day (GB)**: The average data churn observed across all profiling days. This number is used as one of the inputs to decide the number of configuration servers and extra process servers to be used in the deployment.
## Recommendations
The recommendations sheet of the VMware to Azure report has the following detail
![Recommendations for VMware to Azure report](media/site-recovery-vmware-deployment-planner-analyze-report/Recommendations-v2a.png)
### Profiled data
+
![The profiled-data view in the deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/profiled-data-v2a.png)
-**Profiled data period**: The period during which the profiling was run. By default, the tool includes all profiled data in the calculation, unless it generates the report for a specific period by using StartDate and EndDate options during report generation.
+**Profiled data period**: The period during which the profiling was run. By default, the tool includes all profiled data in the calculation, unless it generates the report for a specific period by using `StartDate` and `EndDate` options during report generation.
-**Server Name**: The name or IP address of the VMware vCenter or ESXi host whose VMsΓÇÖ report is generated.
+**Server Name**: The name or IP address of the VMware vCenter or ESXi host whose virtual machines' report is generated.
**Desired RPO**: The recovery point objective for your deployment. By default, the required network bandwidth is calculated for RPO values of 15, 30, and 60 minutes. Based on the selection, the affected values are updated on the sheet. If you have used the *DesiredRPOinMin* parameter while generating the report, that value is shown in the Desired RPO result.
The recommendations sheet of the VMware to Azure report has the following detail
![Profiling results in the deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/profiling-overview-v2a.png)
-**Total Profiled Virtual Machines**: The total number of VMs whose profiled data is available. If the VMListFile has names of any VMs which were not profiled, those VMs are not considered in the report generation and are excluded from the total profiled VMs count.
+**Total Profiled Virtual Machines**: The total number of virtual machines whose profiled data is available. If the VMListFile has names of any virtual machines that weren't profiled, the report generation excludes those virtual machines and doesn't count them in the total of profiled virtual machines.
-**Compatible Virtual Machines**: The number of VMs that can be protected to Azure by using Site Recovery. It is the total number of compatible VMs for which the required network bandwidth, number of storage accounts, number of Azure cores, and number of configuration servers and additional process servers are calculated. The details of every compatible VM are available in the "Compatible VMs" section.
+**Compatible Virtual Machines**: The number of virtual machines that can be protected to Azure by using Site Recovery. The calculation of required network bandwidth, storage accounts, Azure cores, configuration servers, and additional process servers is based on the total number of compatible virtual machines. The details of every compatible virtual machine are available in the *Compatible virtual machines* section.
-**Incompatible Virtual Machines**: The number of profiled VMs that are incompatible for protection with Site Recovery. The reasons for incompatibility are noted in the "Incompatible VMs" section. If the VMListFile has names of any VMs that were not profiled, those VMs are excluded from the incompatible VMs count. These VMs are listed as "Data not found" at the end of the "Incompatible VMs" section.
+**Incompatible Virtual Machines**: The number of profiled virtual machines that are incompatible for protection with Site Recovery. The reasons for incompatibility are noted in the *Incompatible virtual machines* section. If the VMListFile has names of any virtual machines that weren't profiled, those virtual machines are excluded from the incompatible virtual machines count. These virtual machines are listed as "Data not found" at the end of the *Incompatible virtual machines* section.
**Desired RPO**: Your desired recovery point objective, in minutes. The report is generated for three RPO values: 15 (default), 30, and 60 minutes. The bandwidth recommendation in the report is changed based on your selection in the Desired RPO drop-down list at the top right of the sheet. If you have generated the report by using the *-DesiredRPO* parameter with a custom value, this custom value will show as the default in the Desired RPO drop-down list.
The recommendations sheet of the VMware to Azure report has the following detail
![Required network bandwidth in the deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/required-network-bandwidth-v2a.png)
-**To meet RPO 100 percent of the time:** The recommended bandwidth in Mbps to be allocated to meet your desired RPO 100 percent of the time. This amount of bandwidth must be dedicated for steady-state delta replication of all your compatible VMs to avoid any RPO violations.
+**To meet RPO 100 percent of the time:** The recommended bandwidth in Mbps to be allocated to meet your desired RPO 100 percent of the time. This amount of bandwidth must be dedicated for steady-state delta replication of all your compatible virtual machines to avoid any RPO violations.
-**To meet RPO 90 percent of the time**: Because of broadband pricing or for any other reason, if you cannot set the bandwidth needed to meet your desired RPO 100 percent of the time, you can choose to go with a lower bandwidth setting that can meet your desired RPO 90 percent of the time. To understand the implications of setting this lower bandwidth, the report provides a what-if analysis on the number and duration of RPO violations to expect.
+**To meet RPO 90 percent of the time**: If broadband pricing or other factors prevent you from setting the necessary bandwidth to achieve your desired RPO 100 percent of the time, you can opt for a lower bandwidth setting that meets your desired RPO 90 percent of the time. To understand the implications of setting this lower bandwidth, the report provides a what-if analysis on the number and duration of RPO violations to expect.
-**Achieved Throughput:** The throughput from the server on which you have run the GetThroughput command to the Microsoft Azure region where the storage account is located. This throughput number indicates the estimated level that you can achieve when you protect the compatible VMs by using Site Recovery, provided that your configuration server or process server storage and network characteristics remain the same as that of the server from which you have run the tool.
+**Achieved Throughput:** The throughput from the server on which you run the GetThroughput command to the Microsoft Azure region where the storage account is located. This throughput number indicates the estimated level that you can achieve when you protect the compatible virtual machines by using Site Recovery, provided that your configuration server or process server storage and network characteristics remain the same as that of the server from which you run the tool.
For replication, you should set the recommended bandwidth to meet the RPO 100 percent of the time. After you set the bandwidth, if you donΓÇÖt see any increase in the achieved throughput, as reported by the tool, do the following:
For replication, you should set the recommended bandwidth to meet the RPO 100 percent of the time. After you set the bandwidth, if you don't see any increase in the achieved throughput, as reported by the tool, do the following:
4. Change the Site Recovery settings in the process server to [increase the amount of network bandwidth used for replication](./site-recovery-plan-capacity-vmware.md#control-network-bandwidth).
-If you are running the tool on a configuration server or process server that already has protected VMs, run the tool a few times. The achieved throughput number changes depending on the amount of churn being processed at that point in time.
+If you're running the tool on a configuration server or process server that already has protected virtual machines, run the tool a few times. The achieved throughput number changes depending on the amount of churn being processed at the time.
For all enterprise Site Recovery deployments, we recommend that you use [ExpressRoute](https://aka.ms/expressroute).
### Required storage accounts
-The following chart shows the total number of storage accounts (standard and premium) that are required to protect all the compatible VMs. To learn which storage account to use for each VM, see the "VM-storage placement" section. If you are using v2.5 of Deployment Planner, this recommendation only shows the number of standard cache storage accounts which are needed for replication since the data is being directly written to Managed Disks.
+The following chart shows the total number of storage accounts (standard and premium) that are required to protect all the compatible virtual machines. To learn which storage account to use for each virtual machine, see the *VM-storage placement* section. If you're using v2.5 of Deployment Planner, this recommendation only shows the number of standard cache storage accounts which are needed for replication since the data is being directly written to Managed Disks.
![Required storage accounts in the deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/required-storage-accounts-v2a.png)
### Required number of Azure cores
-This result is the total number of cores to be set up before failover or test failover of all the compatible VMs. If too few cores are available in the subscription, Site Recovery fails to create VMs at the time of test failover or failover.
+This result is the total number of cores to be set up before failover or test failover of all the compatible virtual machines. If too few cores are available in the subscription, Site Recovery fails to create virtual machines at the time of test failover or failover.
![Required number of Azure cores in the deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/required-cores-v2a.png)
### Required on-premises infrastructure
-This figure is the total number of configuration servers and additional process servers to be configured that would suffice to protect all the compatible VMs. Depending on the supported [size recommendations for the configuration server](/en-in/azure/site-recovery/site-recovery-plan-capacity-vmware#size-recommendations-for-the-configuration-server), the tool might recommend additional servers. The recommendation is based on the larger of either the per-day churn or the maximum number of protected VMs (assuming an average of three disks per VM), whichever is hit first on the configuration server or the additional process server. You'll find the details of total churn per day and total number of protected disks in the "On-premises summary" section.
+This figure is the total number of configuration servers and extra process servers to be configured that would suffice to protect all the compatible virtual machines. Depending on the supported [size recommendations for the configuration server](./site-recovery-plan-capacity-vmware.md#size-recommendations-for-the-configuration-server-and-inbuilt-process-server), the tool might recommend extra servers. The recommendation is based on the larger of either the per-day churn or the maximum number of protected virtual machines (assuming an average of three disks per virtual machine), whichever is hit first on the configuration server or the extra process server. You'll find the details of total churn per day and total number of protected disks in the "On-premises summary" section.
![Required on-premises infrastructure in the deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/required-on-premises-components-v2a.png)
### What-if analysis
This analysis outlines how many violations could occur during the profiling period when you set a lower bandwidth for the desired RPO to be met only 90 percent of the time. One or more RPO violations can occur on any given day. The graph shows the peak RPO of the day.
-Based on this analysis, you can decide if the number of RPO violations across all days and peak RPO hit per day is acceptable with the specified lower bandwidth. If it is acceptable, you can allocate the lower bandwidth for replication, else allocate the higher bandwidth as suggested to meet the desired RPO 100 percent of the time.
+Based on this analysis, you can decide if the number of RPO violations across all days and peak RPO hit per day is acceptable with the specified lower bandwidth. If it's acceptable, you can allocate the lower bandwidth for replication, else allocate the higher bandwidth as suggested to meet the desired RPO 100 percent of the time.
![What-if analysis in the deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/what-if-analysis-v2a.png)
-### Recommended VM batch size for initial replication
-In this section, we recommend the number of VMs that can be protected in parallel to complete the initial replication within 72 hours with the suggested bandwidth to meet desired RPO 100 percent of the time being set. This value is configurable value. To change it at report-generation time, use the *GoalToCompleteIR* parameter.
+### Recommended virtual machine batch size for initial replication
+In this section, we recommend the number of virtual machines that can be protected in parallel to complete the initial replication within 72 hours, assuming the suggested bandwidth to meet the desired RPO 100 percent of the time is set. This value is configurable. To change it at report-generation time, use the *GoalToCompleteIR* parameter.
-The graph here shows a range of bandwidth values and a calculated VM batch size count to complete initial replication in 72 hours, based on the average detected VM size across all the compatible VMs.
+The graph here shows a range of bandwidth values and a calculated virtual machine batch size count to complete initial replication in 72 hours, based on the average detected virtual machine size across all the compatible virtual machines.
-In the public preview, the report does not specify which VMs should be included in a batch. You can use the disk size shown in the "Compatible VMs" section to find each VMΓÇÖs size and select them for a batch, or you can select the VMs based on known workload characteristics. The completion time of the initial replication changes proportionally, based on the actual VM disk size, used disk space, and available network throughput.
+In the public preview, the report does not specify which virtual machines should be included in a batch. You can use the disk size shown in the *Compatible VMs* section to find each virtual machine's size and select them for a batch, or you can select the virtual machines based on known workload characteristics. The completion time of the initial replication changes proportionally, based on the actual virtual machine disk size, used disk space, and available network throughput.
-![Recommended VM batch size](media/site-recovery-vmware-deployment-planner-analyze-report/ir-batching-v2a.png)
+![Recommended virtual machine batch size](media/site-recovery-vmware-deployment-planner-analyze-report/ir-batching-v2a.png)
### Cost estimation
The graph shows the summary view of the estimated total disaster recovery (DR) cost to Azure for your chosen target region and the currency that you specified for report generation.
![Cost estimation summary](media/site-recovery-vmware-deployment-planner-analyze-report/cost-estimation-summary-v2a.png)
-The summary helps you to understand the cost that you need to pay for storage, compute, network, and license when you protect all your compatible VMs to Azure using Azure Site Recovery. The cost is calculated on for compatible VMs and not on all the profiled VMs.
+The summary helps you to understand the cost that you need to pay for storage, compute, network, and license when you protect all your compatible virtual machines to Azure using Azure Site Recovery. The cost is calculated for compatible virtual machines only, not for all the profiled virtual machines.
You can view the cost either monthly or yearly. Learn more about [supported target regions](./site-recovery-vmware-deployment-planner-cost-estimation.md#supported-target-regions) and [supported currencies](./site-recovery-vmware-deployment-planner-cost-estimation.md#supported-currencies).
**Cost by components**
-The total DR cost is divided into four components: Compute, Storage, Network, and Azure Site Recovery license cost. The cost is calculated based on the consumption that will be incurred during replication and at DR drill time for compute, storage (premium and standard), ExpressRoute/VPN that is configured between the on-premises site and Azure, and Azure Site Recovery license.
+The total DR cost is divided into four components: Compute, Storage, Network, and Azure Site Recovery license cost. The cost is calculated based on the consumption that is incurred during replication and at DR drill time for compute, storage (premium and standard), ExpressRoute/VPN that is configured between the on-premises site and Azure, and Azure Site Recovery license.
**Cost by states**
The total disaster recovery (DR) cost is categorized based on two different states: Replication and DR drill.
-**Replication cost**: The cost that will be incurred during replication. It covers the cost of storage, network, and Azure Site Recovery license.
+**Replication cost**: The cost that is incurred during replication. It covers the cost of storage, network, and Azure Site Recovery license.
-**DR-Drill cost**: The cost that will be incurred during test failovers. Azure Site Recovery spins up VMs during test failover. The DR drill cost covers the running VMsΓÇÖ compute and storage cost.
+**DR-Drill cost**: The cost that is incurred during test failovers. Azure Site Recovery spins up virtual machines during test failover. The DR drill cost covers the running virtual machinesΓÇÖ compute and storage cost.
**Azure storage cost per Month/Year**
-It shows the total storage cost that will be incurred for premium and standard storage for replication and DR drill.
-You can view detailed cost analysis per VM in the [Cost Estimation](site-recovery-vmware-deployment-planner-cost-estimation.md) sheet.
+It shows the total storage cost that is incurred for premium and standard storage for replication and DR drill.
+You can view detailed cost analysis per virtual machine in the [Cost Estimation](site-recovery-vmware-deployment-planner-cost-estimation.md) sheet.
### Growth factor and percentile values used
-This section at the bottom of the sheet shows the percentile value used for all the performance counters of the profiled VMs (default is 95th percentile), and the growth factor (default is 30 percent) that's used in all the calculations.
+This section at the bottom of the sheet shows the percentile value used for all the performance counters of the profiled virtual machines (default is 95th percentile), and the growth factor (default is 30 percent) that's used in all the calculations.
![Growth factor and percentile values used](media/site-recovery-vmware-deployment-planner-analyze-report/growth-factor-v2a.png)
This section at the bottom of the sheet shows the percentile value used for all
![Recommendations with available bandwidth as input](media/site-recovery-vmware-deployment-planner-analyze-report/profiling-overview-bandwidth-input-v2a.png)
-You might have a situation where you know that you cannot set a bandwidth of more than x Mbps for Site Recovery replication. The tool allows you to input available bandwidth (using the -Bandwidth parameter during report generation) and get the achievable RPO in minutes. With this achievable RPO value, you can decide whether you need to set up additional bandwidth or you are OK with having a disaster recovery solution with this RPO.
+You might have a situation where you know that you cannot set a bandwidth of more than x Mbps for Site Recovery replication. The tool allows you to input available bandwidth (using the -Bandwidth parameter during report generation) and get the achievable RPO in minutes. With this achievable RPO value, you can decide whether you need to set up extra bandwidth or you're OK with having a disaster recovery solution with this RPO.
![Achievable RPO for 500 Mbps bandwidth](media/site-recovery-vmware-deployment-planner-analyze-report/achievable-rpo-v2a.png)
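For context, the report inputs referenced throughout this article (`DesiredRPO`, `Bandwidth`, `StartDate`/`EndDate`, `GoalToCompleteIR`, and the VMListFile) are supplied when the report is generated with the Deployment Planner command line. A hedged sketch of such an invocation follows; the server name and paths are placeholders, and the exact parameter syntax can vary by tool version:

```config
ASRDeploymentPlanner.exe -Operation GenerateReport -Server <vCenter or ESXi server> -Directory "E:\ProfiledData" -VMListFile "E:\ProfiledData\VMList.txt" -DesiredRPO 30 -Bandwidth 500
```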
-## VM-storage placement
+## Virtual machine-storage placement
>[!Note]
>Deployment Planner v2.5 onwards recommends the storage placement for machines which will replicate directly to managed disks.
-![VM-storage placement](media/site-recovery-vmware-deployment-planner-analyze-report/vm-storage-placement-v2a.png)
+![virtual machine-storage placement](media/site-recovery-vmware-deployment-planner-analyze-report/vm-storage-placement-v2a.png)
-**Replication Storage Type**: Either a standard or premium managed disk, which is used to replicate all the corresponding VMs mentioned in the **VMs to Place** column.
+**Replication Storage Type**: Either a standard or premium managed disk, which is used to replicate all the corresponding virtual machines mentioned in the **VMs to Place** column.
**Log Storage Account Type**: All the replication logs are stored in a standard storage account.
-**Suggested Prefix for Storage Account**: The suggested three-character prefix that can be used for naming the cache storage account. You can use your own prefix, but the tool's suggestion follows the [partition naming convention for storage accounts](/en-in/azure/storage/blobs/storage-performance-checklist).
+**Suggested Prefix for Storage Account**: The suggested three-character prefix that can be used for naming the cache storage account. You can use your own prefix, but the tool's suggestion follows the [partition naming convention for storage accounts](../storage/blobs/storage-performance-checklist.md).
**Suggested Log Account Name**: The storage-account name after you include the suggested prefix. Replace the name within the angle brackets (< and >) with your custom input.
-**Placement Summary**: A summary of the disks needed to protected VMs by storage type. It includes the total number of VMs, total provisioned size across all disks, and total number of disks.
+**Placement Summary**: A summary of the disks needed to protect virtual machines, by storage type. It includes the total number of virtual machines, total provisioned size across all disks, and total number of disks.
-**Virtual Machines to Place**: A list of all the VMs that should be placed on the given storage account for optimal performance and use.
+**Virtual Machines to Place**: A list of all the virtual machines that should be placed on the given storage account for optimal performance and use.
-## Compatible VMs
-![Excel spreadsheet of compatible VMs](media/site-recovery-vmware-deployment-planner-analyze-report/compatible-vms-v2a.png)
+## Compatible virtual machines
+![Excel spreadsheet of compatible virtual machines](media/site-recovery-vmware-deployment-planner-analyze-report/compatible-vms-v2a.png)
-**VM Name**: The VM name or IP address that's used in the VMListFile when a report is generated. This column also lists the disks (VMDKs) that are attached to the VMs. To distinguish vCenter VMs with duplicate names or IP addresses, the names include the ESXi host name. The listed ESXi host is the one where the VM was placed when the tool discovered during the profiling period.
+**Virtual Machine Name**: The virtual machine name or IP address that's used in the VMListFile when a report is generated. This column also lists the disks (VMDKs) that are attached to the virtual machines. To distinguish vCenter virtual machines with duplicate names or IP addresses, the names include the ESXi host name. The listed ESXi host is the one where the virtual machine was placed when the tool discovered it during the profiling period.
-**VM Compatibility**: Values are **Yes** and **Yes\***. **Yes**\* is for instances in which the VM is a fit for [premium SSDs](/azure/virtual-machines/disks-types). Here, the profiled high-churn or IOPS disk fits in the P20 or P30 category, but the size of the disk causes it to be mapped down to a P10 or P20. The storage account decides which premium storage disk type to map a disk to, based on its size. For example:
+**Virtual Machine Compatibility**: Values are **Yes** and **Yes\***. **Yes**\* is for instances in which the virtual machine is a fit for [premium SSDs](/azure/virtual-machines/disks-types). Here, the profiled high-churn or Input/output operations per second (IOPS) disk fits in the P20 or P30 category, but the size of the disk causes it to be mapped down to a P10 or P20. The storage account decides which premium storage disk type to map a disk to, based on its size. For example:
* <128 GB is a P10.
* 128 GB to 256 GB is a P15.
* 256 GB to 512 GB is a P20.
You might have a situation where you know that you cannot set a bandwidth of mor
* 1025 GB to 2048 GB is a P40.
* 2049 GB to 4095 GB is a P50.
-For example, if the workload characteristics of a disk put it in the P20 or P30 category, but the size maps it down to a lower premium storage disk type, the tool marks that VM as **Yes**\*. The tool also recommends that you either change the source disk size to fit into the recommended premium storage disk type or change the target disk type post-failover.
+For example, if the workload characteristics of a disk put it in the P20 or P30 category, but the size maps it down to a lower premium storage disk type, the tool marks that virtual machine as **Yes**\*. The tool also recommends that you either change the source disk size to fit into the recommended premium storage disk type or change the target disk type post-failover.
**Storage Type**: Standard or premium.
**Asrseeddisk (Managed Disk) created for replication**: The name of the disk that is created when you enable replication. It stores the data and its snapshots in Azure.
-**Peak R/W IOPS (with Growth Factor)**: The peak workload read/write IOPS on the disk (default is 95th percentile), including the future growth factor (default is 30 percent). Note that the total read/write IOPS of a VM is not always the sum of the VMΓÇÖs individual disksΓÇÖ read/write IOPS, because the peak read/write IOPS of the VM is the peak of the sum of its individual disks' read/write IOPS during every minute of the profiling period.
+**Peak R/W IOPS (with Growth Factor)**: The peak workload read/write IOPS on the disk (default is 95th percentile), including the future growth factor (default is 30 percent). The total read/write IOPS of a virtual machine isn't always the sum of the virtual machine's individual disks' read/write IOPS, because the peak read/write IOPS of the virtual machine is the peak of the sum of its individual disks' read/write IOPS during every minute of the profiling period.
-**Peak Data Churn in Mbps (with Growth Factor)**: The peak churn rate on the disk (default is 95th percentile), including the future growth factor (default is 30 percent). Note that the total data churn of the VM is not always the sum of the VMΓÇÖs individual disksΓÇÖ data churn, because the peak data churn of the VM is the peak of the sum of its individual disks' churn during every minute of the profiling period.
+**Peak Data Churn in Mbps (with Growth Factor)**: The peak churn rate on the disk (default is 95th percentile), including the future growth factor (default is 30 percent). The total data churn of the virtual machine isn't always the sum of the virtual machine's individual disks' data churn, because the peak data churn of the virtual machine is the peak of the sum of its individual disks' churn during every minute of the profiling period.
-**Azure VM Size**: The ideal mapped Azure Cloud Services virtual-machine size for this on-premises VM. The mapping is based on the on-premises VMΓÇÖs memory, number of disks/cores/NICs, and read/write IOPS. The recommendation is always the lowest Azure VM size that matches all of the on-premises VM characteristics.
+**Azure Virtual Machine Size**: The ideal mapped Azure Cloud Services virtual machine size for this on-premises virtual machine. The mapping is based on the on-premises virtual machine's memory, number of disks/cores/NICs, and read/write IOPS. The recommendation is always the lowest Azure virtual machine size that matches all of the on-premises virtual machine characteristics.
-**Number of Disks**: The total number of virtual machine disks (VMDKs) on the VM.
+**Number of Disks**: The total number of virtual machine disks (VMDKs) on the virtual machine.
-**Disk size (GB)**: The total setup size of all disks of the VM. The tool also shows the disk size for the individual disks in the VM.
+**Disk size (GB)**: The total setup size of all disks of the virtual machine. The tool also shows the disk size for the individual disks in the virtual machine.
-**Cores**: The number of CPU cores on the VM.
+**Cores**: The number of CPU cores on the virtual machine.
-**Memory (MB)**: The RAM on the VM.
+**Memory (MB)**: The RAM on the virtual machine.
-**NICs**: The number of NICs on the VM.
+**NICs**: The number of NICs on the virtual machine.
-**Boot Type**: Boot type of the VM. It can be either BIOS or EFI. Currently Azure Site Recovery supports Windows Server EFI VMs (Windows Server 2012, 2012 R2 and 2016) provided the number of partitions in the boot disk is less than 4 and boot sector size is 512 bytes. To protect EFI VMs, Azure Site Recovery mobility service version must be 9.13 or above. Only failover is supported for EFI VMs. Failback is not supported.
+**Boot Type**: Boot type of the virtual machine. It can be either BIOS or EFI. Currently Azure Site Recovery supports Windows Server EFI virtual machines (Windows Server 2012, 2012 R2 and 2016) provided the number of partitions in the boot disk is less than 4 and boot sector size is 512 bytes. To protect EFI virtual machines, Azure Site Recovery mobility service version must be 9.13 or later. Only failover is supported for EFI virtual machines. Failback isn't supported.
-**OS Type**: It is OS type of the VM. It can be either Windows or Linux or other based on the chosen template from VMware vSphere while creating the VM.
+**OS Type**: The OS type of the virtual machine. It can be Windows, Linux, or other, based on the template chosen from VMware vSphere while creating the virtual machine.
-## Incompatible VMs
+## Incompatible virtual machines
-![Excel spreadsheet of incompatible VMs
+![Excel spreadsheet of incompatible virtual machines
](media/site-recovery-vmware-deployment-planner-analyze-report/incompatible-vms-v2a.png)
-**VM Name**: The VM name or IP address that's used in the VMListFile when a report is generated. This column also lists the VMDKs that are attached to the VMs. To distinguish vCenter VMs with duplicate names or IP addresses, the names include the ESXi host name. The listed ESXi host is the one where the VM was placed when the tool discovered during the profiling period.
+**Virtual Machine Name**: The virtual machine name or IP address that's used in the VMListFile when a report is generated. This column also lists the VMDKs that are attached to the virtual machines. To distinguish vCenter virtual machines with duplicate names or IP addresses, the names include the ESXi host name. The listed ESXi host is the one where the virtual machine was placed when the tool discovered it during the profiling period.
-**VM Compatibility**: Indicates why the given VM is incompatible for use with Site Recovery. The reasons are described for each incompatible disk of the VM and, based on published [storage limits](/en-in/azure/storage/common/scalability-targets-standard-account), can be any of the following:
+**Virtual Machine Compatibility**: Indicates why the given virtual machine is incompatible for use with Site Recovery. The reasons are described for each incompatible disk of the virtual machine and, based on published [storage limits](../storage/common/scalability-targets-standard-account.md), can be any of the following:
* Wrong data disk size or wrong OS disk size. [Review](vmware-physical-azure-support-matrix.md#azure-vm-requirements) the support limits.
-* Total VM size (replication + TFO) exceeds the supported storage-account size limit (35 TB). This incompatibility usually occurs when a single disk in the VM has a performance characteristic that exceeds the maximum supported Azure or Site Recovery limits for standard storage. Such an instance pushes the VM into the premium storage zone. However, the maximum supported size of a premium storage account is 35 TB, and a single protected VM cannot be protected across multiple storage accounts. Also note that when a test failover is executed on a protected VM, it runs in the same storage account where replication is progressing. In this instance, set up 2x the size of the disk for replication to progress and test failover to succeed in parallel.
+* Total virtual machine size (replication + TFO) exceeds the supported storage-account size limit (35 TB). This incompatibility usually occurs when a single disk in the virtual machine has a performance characteristic that exceeds the maximum supported Azure or Site Recovery limits for standard storage. Such an instance pushes the virtual machine into the premium storage zone. However, the maximum supported size of a premium storage account is 35 TB, and a single protected virtual machine cannot be protected across multiple storage accounts. Also note that when a test failover is executed on a protected virtual machine, it runs in the same storage account where replication is progressing. In this instance, set up 2x the size of the disk for replication to progress and test failover to succeed in parallel.
* Source IOPS exceeds supported storage IOPS limit of 7500 per disk.
-* Source IOPS exceeds supported storage IOPS limit of 80,000 per VM.
+* Source IOPS exceeds supported storage IOPS limit of 80,000 per virtual machine.
* Average data churn exceeds supported Site Recovery data churn limit of 20 MB/s for average I/O size for the disk.
-* Peak data churn across all disks on the VM exceeds the maximum supported Site Recovery peak data churn limit of 54 MB/s per VM.
+* Peak data churn across all disks on the virtual machine exceeds the maximum supported Site Recovery peak data churn limit of 54 MB/s per virtual machine.
* Average effective write IOPS exceeds the supported Site Recovery IOPS limit of 840 for disk.
For example, if the workload characteristics of a disk put it in the P20 or P30
* Total data churn per day exceeds supported churn per day limit of 2 TB by a Process Server.
-**Peak R/W IOPS (with Growth Factor)**: The peak workload IOPS on the disk (default is 95th percentile), including the future growth factor (default is 30 percent). Note that the total read/write IOPS of the VM is not always the sum of the VMΓÇÖs individual disksΓÇÖ read/write IOPS, because the peak read/write IOPS of the VM is the peak of the sum of its individual disks' read/write IOPS during every minute of the profiling period.
+**Peak R/W IOPS (with Growth Factor)**: The peak workload IOPS on the disk (default is 95th percentile), including the future growth factor (default is 30 percent). The total read/write IOPS of the virtual machine isn't always the sum of the virtual machine's individual disks' read/write IOPS, because the peak read/write IOPS of the virtual machine is the peak of the sum of its individual disks' read/write IOPS during every minute of the profiling period.
-**Peak Data Churn in Mbps (with Growth Factor)**: The peak churn rate on the disk (default 95th percentile) including the future growth factor (default 30 percent). Note that the total data churn of the VM is not always the sum of the VMΓÇÖs individual disksΓÇÖ data churn, because the peak data churn of the VM is the peak of the sum of its individual disks' churn during every minute of the profiling period.
+**Peak Data Churn in Mbps (with Growth Factor)**: The peak churn rate on the disk (default 95th percentile) including the future growth factor (default 30 percent). The total data churn of the virtual machine isn't always the sum of the virtual machine's individual disks' data churn, because the peak data churn of the virtual machine is the peak of the sum of its individual disks' churn during every minute of the profiling period.
-**Number of Disks**: The total number of VMDKs on the VM.
+**Number of Disks**: The total number of VMDKs on the virtual machine.
-**Disk size (GB)**: The total setup size of all disks of the VM. The tool also shows the disk size for the individual disks in the VM.
+**Disk size (GB)**: The total setup size of all disks of the virtual machine. The tool also shows the disk size for the individual disks in the virtual machine.
-**Cores**: The number of CPU cores on the VM.
+**Cores**: The number of CPU cores on the virtual machine.
-**Memory (MB)**: The amount of RAM on the VM.
+**Memory (MB)**: The amount of RAM on the virtual machine.
-**NICs**: The number of NICs on the VM.
+**NICs**: The number of NICs on the virtual machine.
-**Boot Type**: Boot type of the VM. It can be either BIOS or EFI. Currently Azure Site Recovery supports Windows Server EFI VMs (Windows Server 2012, 2012 R2 and 2016) provided the number of partitions in the boot disk is less than 4 and boot sector size is 512 bytes. To protect EFI VMs, Azure Site Recovery mobility service version must be 9.13 or above. Only failover is supported for EFI VMs. Failback is not supported.
+**Boot Type**: Boot type of the virtual machine. It can be either BIOS or EFI. Currently Azure Site Recovery supports Windows Server EFI virtual machines (Windows Server 2012, 2012 R2 and 2016) provided the number of partitions in the boot disk is less than 4 and boot sector size is 512 bytes. To protect EFI virtual machines, Azure Site Recovery mobility service version must be 9.13 or above. Only failover is supported for EFI virtual machines. Failback isn't supported.
-**OS Type**: It is OS type of the VM. It can be either Windows or Linux or other based on the chosen template from VMware vSphere while creating the VM.
+**OS Type**: The OS type of the virtual machine. It can be Windows, Linux, or other, based on the template chosen in VMware vSphere when the virtual machine was created.
## Azure Site Recovery limits The following table provides the Azure Site Recovery limits. These limits are based on our tests, but they cannot cover all possible application I/O combinations. Actual results can vary based on your application I/O mix. For best results, even after deployment planning, we always recommend that you perform extensive application testing by issuing a test failover to get the true performance picture of the application.
Premium P20 or P30 or P40 or P50 disk | 16 KB or greater | 20 MB/s | 1684 GB per
**Source data churn** | **Maximum Limit** |
-Peak data churn across all disks on a VM | 54 MB/s
+Peak data churn across all disks on a virtual machine | 54 MB/s
Maximum data churn per day supported by a Process Server | 2 TB
-These are average numbers assuming a 30 percent I/O overlap. Site Recovery is capable of handling higher throughput based on overlap ratio, larger write sizes, and actual workload I/O behavior. The preceding numbers assume a typical backlog of approximately five minutes. That is, after data is uploaded, it is processed and a recovery point is created within five minutes.
+These are average numbers assuming a 30 percent I/O overlap. Site Recovery is capable of handling higher throughput based on overlap ratio, larger write sizes, and actual workload I/O behavior. The preceding numbers assume a typical backlog of approximately five minutes. That is, after data is uploaded, it's processed and a recovery point is created within five minutes.
## Cost estimation
site-recovery Site Recovery Vmware Deployment Planner Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-run.md
Title: Run the Deployment Planner for VMware disaster recovery with Azure Site Recovery description: This article describes how to run Azure Site Recovery Deployment Planner for VMware disaster recovery to Azure. - -+ Last updated 12/15/2023
This article is the Azure Site Recovery Deployment Planner user guide for VMware
## Modes of running deployment planner You can run the command-line tool (ASRDeploymentPlanner.exe) in any of the following three modes:
-1. [Profiling](#profile-vmware-vms)
+1. [Profiling](#profile-vmware-virtual-machines)
2. [Report generation](#generate-report) 3. [Get throughput](#get-throughput)
-First, run the tool in profiling mode to gather VM data churn and IOPS. Next, run the tool to generate the report to find the network bandwidth, storage requirements and DR cost.
+First, run the tool in profiling mode to gather virtual machine data churn and IOPS. Next, run the tool in report-generation mode to find the network bandwidth, storage requirements, and DR cost.
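As an orientation only, here's a minimal sketch of the three modes run in sequence from the folder that contains the tool. The server, storage account, and path names are taken from the examples later in this article; substitute your own values, and replace `<storage-account-key>` with your key. Full parameter lists are described in the sections that follow.

```powershell
# 1) Profile the virtual machines listed in ProfileVMList1.txt for 30 days.
.\ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -NoOfDaysToProfile 30 -User vCenterUser1

# 2) Optionally measure achievable throughput to a General-purpose v1 storage account.
.\ASRDeploymentPlanner.exe -Operation GetThroughput -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -StorageAccountName asrspfarm1 -StorageAccountKey "<storage-account-key>"

# 3) Generate the report from the profiled data.
.\ASRDeploymentPlanner.exe -Operation GenerateReport -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt"
```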
-## Profile VMware VMs
-In profiling mode, the deployment planner tool connects to the vCenter server/vSphere ESXi host to collect performance data about the VM.
+## Profile VMware virtual machines
+In profiling mode, the deployment planner tool connects to the vCenter server/vSphere ESXi host to collect performance data about the virtual machine.
-* Profiling does not affect the performance of the production VMs, because no direct connection is made to them. All performance data is collected from the vCenter server/vSphere ESXi host.
-* To ensure that there is a negligible impact on the server because of profiling, the tool queries the vCenter server/vSphere ESXi host once every 15 minutes. This query interval does not compromise profiling accuracy, because the tool stores every minuteΓÇÖs performance counter data.
+* Profiling doesn't affect the performance of the production virtual machines, because no direct connection is made to them. All performance data is collected from the vCenter server/vSphere ESXi host.
+* To ensure that there is a negligible impact on the server because of profiling, the tool queries the vCenter server/vSphere ESXi host once every 15 minutes. This query interval doesn't compromise profiling accuracy, because the tool stores every minute's performance counter data.
-### Create a list of VMs to profile
-First, you need a list of the VMs to be profiled. You can get all the names of VMs on a vCenter server/vSphere ESXi host by using the VMware vSphere PowerCLI commands in the following procedure. Alternatively, you can list in a file the friendly names or IP addresses of the VMs that you want to profile manually.
+### Create a list of virtual machines to profile
+First, you need a list of the virtual machines to be profiled. You can get all the names of virtual machines on a vCenter server/vSphere ESXi host by using the VMware vSphere PowerCLI commands in the following procedure. Alternatively, you can list in a file the friendly names or IP addresses of the virtual machines that you want to profile manually.
-1. Sign in to the VM that VMware vSphere PowerCLI is installed in.
+1. Sign in to the virtual machine that VMware vSphere PowerCLI is installed in.
2. Open the VMware vSphere PowerCLI console.
-3. Ensure that the execution policy is enabled for the script. If it is disabled, launch the VMware vSphere PowerCLI console in administrator mode, and then enable it by running the following command:
+3. Ensure that the execution policy is enabled for the script. If it's disabled, launch the VMware vSphere PowerCLI console in administrator mode, and then enable it by running the following command:
```powershell Set-ExecutionPolicy -ExecutionPolicy AllSigned ```
-4. You may optionally need to run the following command if Connect-VIServer is not recognized as the name of cmdlet.
+4. You may optionally need to run the following command if Connect-VIServer isn't recognized as the name of a cmdlet.
```powershell Add-PSSnapin VMware.VimAutomation.Core ```
-5. To get all the names of VMs on a vCenter server/vSphere ESXi host and store the list in a .txt file, run the two commands listed here.
+5. To get all the names of virtual machines on a vCenter server/vSphere ESXi host and store the list in a .txt file, run the two commands listed here.
Replace &lsaquo;server name&rsaquo;, &lsaquo;user name&rsaquo;, &lsaquo;password&rsaquo;, &lsaquo;outputfile.txt&rsaquo;; with your inputs. ```powershell
Replace &lsaquo;server name&rsaquo;, &lsaquo;user name&rsaquo;, &lsaquo;password
Get-VM | Select Name | Sort-Object -Property Name > <outputfile.txt> ```
-6. Open the output file in Notepad, and then copy the names of all VMs that you want to profile to another file (for example, ProfileVMList.txt), one VM name per line. This file is used as input to the *-VMListFile* parameter of the command-line tool.
+6. Open the output file in Notepad, and then copy the names of all virtual machines that you want to profile to another file (for example, ProfileVMList.txt), one virtual machine name per line. This file is used as input to the *-VMListFile* parameter of the command-line tool.
![VM name list in the deployment planner ](media/site-recovery-vmware-deployment-planner-run/profile-vm-list-v2a.png) ### Start profiling
-After you have the list of VMs to be profiled, you can run the tool in profiling mode. Here is the list of mandatory and optional parameters of the tool to run in profiling mode.
+After you have the list of virtual machines to be profiled, you can run the tool in profiling mode. Here is the list of mandatory and optional parameters of the tool to run in profiling mode.
``` ASRDeploymentPlanner.exe -Operation StartProfiling /?
ASRDeploymentPlanner.exe -Operation StartProfiling /?
| Parameter name | Description | ||| | -Operation | StartProfiling |
-| -Server | The fully qualified domain name or IP address of the vCenter server/vSphere ESXi host whose VMs are to be profiled.|
+| -Server | The fully qualified domain name or IP address of the vCenter server/vSphere ESXi host whose virtual machines are to be profiled.|
| -User | The user name to connect to the vCenter server/vSphere ESXi host. The user needs to have read-only access, at minimum.|
-| -VMListFile | The file that contains the list of VMs to be profiled. The file path can be absolute or relative. The file should contain one VM name/IP address per line. Virtual machine name specified in the file should be the same as the VM name on the vCenter server/vSphere ESXi host.<br>For example, the file VMList.txt contains the following VMs:<ul><li>virtual_machine_A</li><li>10.150.29.110</li><li>virtual_machine_B</li><ul> |
+| -VMListFile | The file that contains the list of virtual machines to be profiled. The file path can be absolute or relative. The file should contain one VM name/IP address per line. Virtual machine name specified in the file should be the same as the virtual machine name on the vCenter server/vSphere ESXi host.<br>For example, the file VMList.txt contains the following VMs:<ul><li>virtual_machine_A</li><li>10.150.29.110</li><li>virtual_machine_B</li><ul> |
|-NoOfMinutesToProfile|The number of minutes for which profiling is to be run. Minimum is 30 minutes.| |-NoOfHoursToProfile|The number of hours for which profiling is to be run.| | -NoOfDaysToProfile | The number of days for which profiling is to be run. We recommend that you run profiling for more than 7 days to ensure that the workload pattern in your environment over the specified period is observed and used to provide an accurate recommendation. | |-Virtualization|Specify the virtualization type (VMware or Hyper-V).|
-| -Directory | (Optional) The universal naming convention (UNC) or local directory path to store profiling data generated during profiling. If a directory name is not given, the directory named ΓÇÿProfiledDataΓÇÖ under the current path is used as the default directory. |
-| -Password | (Optional) The password to use to connect to the vCenter server/vSphere ESXi host. If you do not specify one now, you are prompted for it when the command is executed.|
+| -Directory | (Optional) The universal naming convention (UNC) or local directory path to store profiling data generated during profiling. If a directory name isn't given, the directory named 'ProfiledData' under the current path is used as the default directory. |
+| -Password | (Optional) The password to use to connect to the vCenter server/vSphere ESXi host. If you don't specify one now, you're prompted for it when the command is executed.|
|-Port|(Optional) Port number to connect to vCenter/ESXi host. Default port is 443.| |-Protocol| (Optional) Specifies the protocol, either 'http' or 'https', to connect to vCenter. Default protocol is https.| | -StorageAccountName | (Optional) The storage-account name that's used to find the throughput achievable for replication of data from on-premises to Azure. The tool uploads test data to this storage account to calculate throughput. The storage account must be General-purpose v1 (GPv1) type. | | -StorageAccountKey | (Optional) The storage-account key that's used to access the storage account. Go to the Azure portal > Storage accounts > <*Storage account name*> > Settings > Access Keys > Key1. |
-| -Environment | (optional) This is your target Azure Storage account environment. This can be one of three values - AzureCloud,AzureUSGovernment, AzureChinaCloud. Default is AzureCloud. Use the parameter when your target Azure region is either Azure US Government or Microsoft Azure operated by 21Vianet. |
+| -Environment | (optional) This is your target Azure Storage account environment. This can be one of three values - AzureCloud, AzureUSGovernment, AzureChinaCloud. Default is AzureCloud. Use the parameter when your target Azure region is either Azure US Government or Microsoft Azure operated by 21Vianet. |
-We recommend that you profile your VMs for more than 7 days. If churn pattern varies in a month, we recommend to profile during the week when you see the maximum churn. The best way is to profile for 31 days to get better recommendation. During the profiling period, ASRDeploymentPlanner.exe keeps running. The tool takes profiling time input in days. For a quick test of the tool or for proof of concept you can profile for few hours or minutes. The minimum allowed profiling time is 30 minutes.
+We recommend that you profile your virtual machines for more than 7 days. If the churn pattern varies in a month, we recommend profiling during the week when you see the maximum churn. The best way is to profile for 31 days to get a better recommendation. During the profiling period, ASRDeploymentPlanner.exe keeps running. The tool takes profiling time input in days. For a quick test of the tool or for a proof of concept, you can profile for a few hours or minutes. The minimum allowed profiling time is 30 minutes.
-During profiling, you can optionally pass a storage-account name and key to find the throughput that Site Recovery can achieve at the time of replication from the configuration server or process server to Azure. If the storage-account name and key are not passed during profiling, the tool does not calculate achievable throughput.
+During profiling, you can optionally pass a storage-account name and key to find the throughput that Site Recovery can achieve at the time of replication from the configuration server or process server to Azure. If the storage-account name and key are not passed during profiling, the tool doesn't calculate achievable throughput.
-You can run multiple instances of the tool for various sets of VMs. Ensure that the VM names are not repeated in any of the profiling sets. For example, if you have profiled ten VMs (VM1 through VM10) and after few days you want to profile another five VMs (VM11 through VM15), you can run the tool from another command-line console for the second set of VMs (VM11 through VM15). Ensure that the second set of VMs do not have any VM names from the first profiling instance or you use a different output directory for the second run. If two instances of the tool are used for profiling the same VMs and use the same output directory, the generated report is incorrect.
+You can run multiple instances of the tool for various sets of virtual machines. Ensure that the virtual machine names are not repeated in any of the profiling sets. For example, if you have profiled ten virtual machines (VM1 through VM10) and after a few days you want to profile another five virtual machines (VM11 through VM15), you can run the tool from another command-line console for the second set of virtual machines (VM11 through VM15). Ensure that the second set of virtual machines doesn't include any virtual machine names from the first profiling instance, or use a different output directory for the second run. If two instances of the tool are used for profiling the same virtual machines and use the same output directory, the generated report is incorrect.
-By default, the tool is configured to profile and generate report up to 1000 VMs. You can change limit by changing MaxVMsSupported key value in *ASRDeploymentPlanner.exe.config* file.
+By default, the tool is configured to profile and generate a report for up to 1,000 virtual machines. You can change the limit by changing the MaxVmsSupported key value in the *ASRDeploymentPlanner.exe.config* file.
``` <!-- Maximum number of vms supported--> <add key="MaxVmsSupported" value="1000"/> ```
-With the default settings, to profile say 1500 VMs, create two VMList.txt files. One with 1000 VMs and other with 500 VM list. Run the two instances of Azure Site Recovery Deployment Planner, one with VMList1.txt and other with VMList2.txt. You can use the same directory path to store the profiled data of both the VMList VMs.
+With the default settings, to profile, say, 1,500 virtual machines, create two list files: one with 1,000 virtual machines and the other with the remaining 500. Run two instances of Azure Site Recovery Deployment Planner, one with VMList1.txt and the other with VMList2.txt. You can use the same directory path to store the profiled data for both lists.
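As an illustration only, the following PowerShell sketch splits one large list file into files of at most 1,000 names each. The input file VMList.txt and the output names VMList1.txt, VMList2.txt, and so on are placeholders; adjust the paths to match your environment.

```powershell
# Split a large VM list into files of at most 1,000 names each.
$chunkSize = 1000
$names = Get-Content -Path "VMList.txt" | Where-Object { $_.Trim() -ne "" }
$part = 1
for ($i = 0; $i -lt $names.Count; $i += $chunkSize) {
    # Compute the last index of this chunk and write it to its own file.
    $end = [Math]::Min($i + $chunkSize, $names.Count) - 1
    $names[$i..$end] | Set-Content -Path ("VMList{0}.txt" -f $part)
    $part++
}
```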
We have seen that, based on the hardware configuration (especially the RAM size) of the server from which the tool is run to generate the report, the operation may fail with insufficient memory. If you have adequate hardware, you can change MaxVmsSupported to a higher value. If you have multiple vCenter servers, you need to run one instance of ASRDeploymentPlanner for each vCenter server for profiling.
-VM configurations are captured once at the beginning of the profiling operation and stored in a file called VMDetailList.xml. This information is used when the report is generated. Any change in VM configuration (for example, an increased number of cores, disks, or NICs) from the beginning to the end of profiling is not captured. If a profiled VM configuration has changed during the profiling, in the public preview, here is the workaround to get latest VM details when generating the report:
+Virtual machine configurations are captured once at the beginning of the profiling operation and stored in a file called VMDetailList.xml. This information is used when the report is generated. Any change in virtual machine configuration (for example, an increased number of cores, disks, or NICs) from the beginning to the end of profiling isn't captured. If a profiled virtual machine configuration has changed during profiling, in the public preview, here is the workaround to get the latest virtual machine details when generating the report (an example command follows the list):
* Back up VMdetailList.xml, and delete the file from its current location. * Pass -User and -Password arguments at the time of report generation.
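For example, a sketch (reusing the server, directory, and file names from the profiling examples in this article; substitute your own values) that regenerates the report and passes -User so the tool fetches the latest virtual machine configuration. You're prompted for the password when the command runs.

```powershell
# Regenerate the report; passing -User makes the tool refresh VM configuration details from vCenter.
.\ASRDeploymentPlanner.exe -Operation GenerateReport -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -User vCenterUser1
```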
-The profiling command generates several files in the profiling directory. Do not delete any of the files, because doing so affects report generation.
+The profiling command generates several files in the profiling directory. Don't delete any of the files, because doing so affects report generation.
-#### Example 1: Profile VMs for 30 days, and find the throughput from on-premises to Azure
+#### Example 1: Profile virtual machines for 30 days, and find the throughput from on-premises to Azure
``` ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -NoOfDaysToProfile 30 -User vCenterUser1 -StorageAccountName asrspfarm1 -StorageAccountKey Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== ```
ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Direc
ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -NoOfDaysToProfile 15 -User vCenterUser1 ```
-#### Example 3: Profile VMs for 60 minutes for a quick test of the tool
+#### Example 3: Profile virtual machines for 60 minutes for a quick test of the tool
```
-ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Directory ΓÇ£E:\vCenter1_ProfiledDataΓÇ¥ -Server vCenter1.contoso.com -VMListFile ΓÇ£E:\vCenter1_ProfiledData\ProfileVMList1.txtΓÇ¥ -NoOfMinutesToProfile 60 -User vCenterUser1
+ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -NoOfMinutesToProfile 60 -User vCenterUser1
```
-#### Example 4: Profile VMs for 2 hours for a proof of concept
+#### Example 4: Profile virtual machines for 2 hours for a proof of concept
``` ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -NoOfHoursToProfile 2 -User vCenterUser1 ```
ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware -Direc
>[!NOTE] > >* If the server that the tool is running on is rebooted or has crashed, or if you close the tool by using Ctrl + C, the profiled data is preserved. However, there is a chance of missing the last 15 minutes of profiled data. In such an instance, rerun the tool in profiling mode after the server restarts.
->* When the storage-account name and key are passed, the tool measures the throughput at the last step of profiling. If the tool is closed before profiling is completed, the throughput is not calculated. To find the throughput before generating the report, you can run the GetThroughput operation from the command-line console. Otherwise, the generated report won't contain the throughput information.
+>* When the storage-account name and key are passed, the tool measures the throughput at the last step of profiling. If the tool is closed before profiling is completed, the throughput isn't calculated. To find the throughput before generating the report, you can run the GetThroughput operation from the command-line console. Otherwise, the generated report won't contain the throughput information.
## Generate report
After profiling is complete, you can run the tool in report-generation mode. The
|Parameter name | Description | |-|-| | -Operation | GenerateReport |
-| -Server | The vCenter/vSphere server fully qualified domain name or IP address (use the same name or IP address that you used at the time of profiling) where the profiled VMs whose report is to be generated are located. If you used a vCenter server at the time of profiling, you cannot use a vSphere server for report generation, and vice-versa.|
-| -VMListFile | The file that contains the list of profiled VMs that the report is to be generated for. The file path can be absolute or relative. The file should contain one VM name or IP address per line. The VM names that are specified in the file should be the same as the VM names on the vCenter server/vSphere ESXi host, and match what was used during profiling.|
+| -Server | The vCenter/vSphere server fully qualified domain name or IP address (use the same name or IP address that you used at the time of profiling) where the profiled virtual machines whose report is to be generated are located. If you used a vCenter server at the time of profiling, you can't use a vSphere server for report generation, and vice-versa.|
+| -VMListFile | The file that contains the list of profiled virtual machines that the report is to be generated for. The file path can be absolute or relative. The file should contain one virtual machine name or IP address per line. The virtual machine names that are specified in the file should be the same as the virtual machine names on the vCenter server/vSphere ESXi host, and match what was used during profiling.|
|-Virtualization|Specify the virtualization type (VMware or Hyper-V).| | -Directory | (Optional) The UNC or local directory path where the profiled data (files generated during profiling) is stored. This data is required for generating the report. If a name isn't specified, the 'ProfiledData' directory is used. |
-| -GoalToCompleteIR | (Optional) The number of hours in which the initial replication of the profiled VMs needs to be completed. The generated report provides the number of VMs for which initial replication can be completed in the specified time. The default is 72 hours. |
-| -User | (Optional) The user name to use to connect to the vCenter/vSphere server. The name is used to fetch the latest configuration information of the VMs, such as the number of disks, number of cores, and number of NICs, to use in the report. If the name isn't provided, the configuration information collected at the beginning of the profiling kickoff is used. |
-| -Password | (Optional) The password to use to connect to the vCenter server/vSphere ESXi host. If the password isn't specified as a parameter, you is prompted for it later when the command is executed. |
+| -GoalToCompleteIR | (Optional) The number of hours in which the initial replication of the profiled virtual machines needs to be completed. The generated report provides the number of virtual machines for which initial replication can be completed in the specified time. The default is 72 hours. |
+| -User | (Optional) The user name to use to connect to the vCenter/vSphere server. The name is used to fetch the latest configuration information of the virtual machines, such as the number of disks, number of cores, and number of NICs, to use in the report. If the name isn't provided, the configuration information collected at the beginning of the profiling kickoff is used. |
+| -Password | (Optional) The password to use to connect to the vCenter server/vSphere ESXi host. If the password isn't specified as a parameter, you're prompted for it later when the command is executed. |
|-Port|(Optional) Port number to connect to vCenter/ESXi host. Default port is 443.| |-Protocol|(Optional) Specifies the protocol, either 'http' or 'https', to connect to vCenter. Default protocol is https.| | -DesiredRPO | (Optional) The desired recovery point objective, in minutes. The default is 15 minutes.|
After profiling is complete, you can run the tool in report-generation mode. The
| -UseManagedDisks | (Optional) UseManagedDisks - Yes/No. Default is Yes. The number of virtual machines that can be placed into a single storage account is calculated considering whether Failover/Test failover of virtual machines is done on managed disk instead of unmanaged disk. | |-SubscriptionId |(Optional) The subscription GUID. This parameter is required when you need to generate the cost estimation report with the latest price based on your subscription, the offer that is associated with your subscription and for your specific target Azure region in the **specified currency**.| |-TargetRegion|(Optional) The Azure region where replication is targeted. Since Azure has different costs per region, to generate report with specific target Azure region use this parameter.<br>Default is WestUS2 or the last used target region.<br>Refer to the list of [supported target regions](site-recovery-vmware-deployment-planner-cost-estimation.md#supported-target-regions).|
-|-OfferId|(Optional) The offer associated with the give subscription. Default is MS-AZR-0003P (Pay-As-You-Go).|
+|-OfferId|(Optional) The offer associated with the given subscription. Default is MS-AZR-0003P (pay-as-you-go).|
|-Currency|(Optional) The currency in which cost is shown in the generated report. Default is US Dollar ($) or the last used currency.<br>Refer to the list of [supported currencies](site-recovery-vmware-deployment-planner-cost-estimation.md#supported-currencies).|
-By default, the tool is configured to profile and generate report up to 1000 VMs. You can change limit by changing MaxVMsSupported key value in *ASRDeploymentPlanner.exe.config* file.
+By default, the tool is configured to profile and generate a report for up to 1,000 virtual machines. You can change the limit by changing the MaxVmsSupported key value in the *ASRDeploymentPlanner.exe.config* file.
```xml <!-- Maximum number of vms supported--> <add key="MaxVmsSupported" value="1000"/>
ASRDeploymentPlanner.exe -Operation GenerateReport -Virtualization VMware -Dire
## Percentile value used for the calculation **What default percentile value of the performance metrics collected during profiling does the tool use when it generates a report?**
-The tool defaults to the 95th percentile values of read/write IOPS, write IOPS, and data churn that are collected during profiling of all the VMs. This metric ensures that the 100th percentile spike your VMs might see because of temporary events is not used to determine your target storage-account and source-bandwidth requirements. For example, a temporary event might be a backup job running once a day, a periodic database indexing or analytics report-generation activity, or other similar short-lived, point-in-time events.
+The tool defaults to the 95th percentile values of read/write IOPS, write IOPS, and data churn that are collected during profiling of all the virtual machines. This metric ensures that a 100th percentile spike that your virtual machines might see because of temporary events isn't used to determine your target storage-account and source-bandwidth requirements. For example, a temporary event might be a backup job running once a day, a periodic database indexing or analytics report-generation activity, or other similar short-lived, point-in-time events.
-Using 95th percentile values gives a true picture of real workload characteristics, and it gives you the best performance when the workloads are running on Azure. We do not anticipate that you would need to change this number. If you do change the value (to the 90th percentile, for example), you can update the configuration file *ASRDeploymentPlanner.exe.config* in the default folder and save it to generate a new report on the existing profiled data.
+Using 95th percentile values gives a true picture of real workload characteristics, and it gives you the best performance when the workloads are running on Azure. We don't anticipate that you would need to change this number. If you do change the value (to the 90th percentile, for example), you can update the configuration file *ASRDeploymentPlanner.exe.config* in the default folder and save it to generate a new report on the existing profiled data.
```xml <add key="WriteIOPSPercentile" value="95" /> <add key="ReadWriteIOPSPercentile" value="95" />
Using 95th percentile values gives a true picture of real workload characteristi
## Growth-factor considerations **Why should I consider growth factor when I plan deployments?**
-It is critical to account for growth in your workload characteristics, assuming a potential increase in usage over time. After protection is in place, if your workload characteristics change, you cannot switch to a different storage account for protection without disabling and re-enabling the protection.
+It's critical to account for growth in your workload characteristics, assuming a potential increase in usage over time. After protection is in place, if your workload characteristics change, you can't switch to a different storage account for protection without disabling and re-enabling the protection.
-For example, let's say that today your VM fits in a standard storage replication account. Over the next three months, several changes are likely to occur:
+For example, let's say that today your virtual machine fits in a standard storage replication account. Over the next three months, several changes are likely to occur:
-* The number of users of the application that runs on the VM increases.
-* The resulting increased churn on the VM requires the VM to go to premium storage so that Site Recovery replication can keep pace.
-* Consequently, you have to disable and re-enable protection to a premium storage account.
+* The number of users of the application that runs on the virtual machine increases.
+* The resulting increased churn on the virtual machine requires the virtual machine to go to premium storage so that Site Recovery replication can keep pace.
+* So, you have to disable and re-enable protection to a premium storage account.
-We strongly recommend that you plan for growth during deployment planning and while the default value is 30 percent. You are the expert on your application usage pattern and growth projections, and you can change this number accordingly while generating a report. Moreover, you can generate multiple reports with various growth factors with the same profiled data and determine what target storage and source bandwidth recommendations work best for you.
+We strongly recommend that you plan for growth during deployment planning; the default growth factor is 30 percent. You're the expert on your application usage pattern and growth projections, and you can change this number accordingly while generating a report. Moreover, you can generate multiple reports with various growth factors from the same profiled data and determine which target storage and source bandwidth recommendations work best for you.
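For example, a sketch that generates two reports from the same profiled data under different growth assumptions. The -GrowthFactor parameter name is an assumption here; confirm the exact option with `ASRDeploymentPlanner.exe -Operation GenerateReport /?` before using it.

```powershell
# Compare recommendations under 30 percent and 50 percent growth assumptions.
# Note: -GrowthFactor is an assumed parameter name; verify it with the tool's /? help.
.\ASRDeploymentPlanner.exe -Operation GenerateReport -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -GrowthFactor 30
.\ASRDeploymentPlanner.exe -Operation GenerateReport -Virtualization VMware -Directory "E:\vCenter1_ProfiledData" -Server vCenter1.contoso.com -VMListFile "E:\vCenter1_ProfiledData\ProfileVMList1.txt" -GrowthFactor 50
```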
The generated Microsoft Excel report contains the following information: * [On-premises Summary](site-recovery-vmware-deployment-planner-analyze-report.md#on-premises-summary) * [Recommendations](site-recovery-vmware-deployment-planner-analyze-report.md#recommendations)
-* [VM<->Storage Placement](site-recovery-vmware-deployment-planner-analyze-report.md#vm-storage-placement)
-* [Compatible VMs](site-recovery-vmware-deployment-planner-analyze-report.md#compatible-vms)
-* [Incompatible VMs](site-recovery-vmware-deployment-planner-analyze-report.md#incompatible-vms)
+* [Virtual machine<->Storage Placement](site-recovery-vmware-deployment-planner-analyze-report.md#virtual-machine-storage-placement)
+* [Compatible virtual machines](site-recovery-vmware-deployment-planner-analyze-report.md#compatible-virtual-machines)
+* [Incompatible virtual machines](site-recovery-vmware-deployment-planner-analyze-report.md#incompatible-virtual-machines)
* [Cost Estimation](site-recovery-vmware-deployment-planner-cost-estimation.md) ![Deployment planner](media/site-recovery-vmware-deployment-planner-analyze-report/Recommendations-v2a.png)
Open a command-line console, and go to the Site Recovery deployment planning too
|-|-| | -Operation | GetThroughput | |-Virtualization|Specify the virtualization type (VMware or Hyper-V).|
-| -Directory | (Optional) The UNC or local directory path where the profiled data (files generated during profiling) is stored. This data is required for generating the report. If a directory name is not specified, ΓÇÿProfiledDataΓÇÖ directory is used. |
+| -Directory | (Optional) The UNC or local directory path where the profiled data (files generated during profiling) is stored. This data is required for generating the report. If a directory name isn't specified, the 'ProfiledData' directory is used. |
| -StorageAccountName | The storage-account name that's used to find the bandwidth consumed for replication of data from on-premises to Azure. The tool uploads test data to this storage account to find the bandwidth consumed. The storage account must be either General-purpose v1 (GPv1) type.| | -StorageAccountKey | The storage-account key that's used to access the storage account. Go to the Azure portal > Storage accounts > <*Storage account name*> > Settings > Access Keys > Key1 (or a primary access key for a classic storage account). |
-| -VMListFile | The file that contains the list of VMs to be profiled for calculating the bandwidth consumed. The file path can be absolute or relative. The file should contain one VM name/IP address per line. The VM names specified in the file should be the same as the VM names on the vCenter server/vSphere ESXi host.<br>For example, the file VMList.txt contains the following VMs:<ul><li>VM_A</li><li>10.150.29.110</li><li>VM_B</li></ul>|
-| -Environment | (optional) This is your target Azure Storage account environment. This can be one of three values - AzureCloud,AzureUSGovernment, AzureChinaCloud. Default is AzureCloud. Use the parameter when your target Azure region is either Azure US Government or Microsoft Azure operated by 21Vianet. |
+| -VMListFile | The file that contains the list of virtual machines to be profiled for calculating the bandwidth consumed. The file path can be absolute or relative. The file should contain one virtual machine name/IP address per line. The virtual machine names specified in the file should be the same as the virtual machine names on the vCenter server/vSphere ESXi host.<br>For example, the file VMList.txt contains the following VMs:<ul><li>VM_A</li><li>10.150.29.110</li><li>VM_B</li></ul>|
+| -Environment | (optional) This is your target Azure Storage account environment. This can be one of three values - AzureCloud, AzureUSGovernment, AzureChinaCloud. Default is AzureCloud. Use the parameter when your target Azure region is either Azure US Government or Microsoft Azure operated by 21Vianet. |
-The tool creates several 64-MB asrvhdfile<#>.vhd files (where "#" is the number of files) on the specified directory. The tool uploads the files to the storage account to find the throughput. After the throughput is measured, the tool deletes all the files from the storage account and from the local server. If the tool is terminated for any reason while it is calculating throughput, it doesn't delete the files from the storage or from the local server. You have to delete them manually.
+The tool creates several 64-MB asrvhdfile<#>.vhd files (where "#" is the number of files) on the specified directory. The tool uploads the files to the storage account to find the throughput. After the throughput is measured, the tool deletes all the files from the storage account and from the local server. If the tool is terminated for any reason while it's calculating throughput, it doesn't delete the files from the storage or from the local server. You have to delete them manually.
-The throughput is measured at a specified point in time, and it is the maximum throughput that Site Recovery can achieve during replication, provided that all other factors remain the same. For example, if any application starts consuming more bandwidth on the same network, the actual throughput varies during replication. If you are running the GetThroughput command from a configuration server, the tool is unaware of any protected VMs and ongoing replication. The result of the measured throughput is different if the GetThroughput operation is run when the protected VMs have high data churn. We recommend that you run the tool at various points in time during profiling to understand what throughput levels can be achieved at various times. In the report, the tool shows the last measured throughput.
+The throughput is measured at a specified point in time, and it's the maximum throughput that Site Recovery can achieve during replication, if all other factors remain the same. For example, if any application starts consuming more bandwidth on the same network, the actual throughput varies during replication. If you're running the GetThroughput command from a configuration server, the tool is unaware of any protected virtual machines and ongoing replication. The result of the measured throughput is different if the GetThroughput operation is run when the protected virtual machines have high data churn. We recommend that you run the tool at various points in time during profiling to understand what throughput levels can be achieved at various times. In the report, the tool shows the last measured throughput.
### Example ```
site-recovery Site Recovery Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new-archive.md
Title: Archive for What's new in Azure Site Recovery description: An archive for features and updates in the Azure Site Recovery service.-+
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Title: What's new in Azure Site Recovery description: Provides a summary of the latest updates in the Azure Site Recovery service.-+
site-recovery Site Recovery Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-workload.md
Title: About disaster recovery for on-premises apps with Azure Site Recovery description: Describes the workloads that can be protected using disaster recovery with the Azure Site Recovery service.-+ Last updated 01/10/2024
Use Site Recovery to protect your SAP deployment, as follows:
Use Site Recovery to protect your Internet Information Services (IIS) deployment, as follows:
-Azure Site Recovery provides disaster recovery by replicating the critical components in your environment to a cold remote site or a public cloud like Microsoft Azure. Since the virtual machines with the web server and the database are replicated to the recovery site, there's no requirement for a separate backup for configuration files or certificates. The application mappings and bindings dependent on environment variables that are changed post failover can be updated through scripts integrated into the disaster recovery plans. Virtual machines are brought up on the recovery site only during a failover. Azure Site Recovery also helps you orchestrate the end-to-end failover by providing you the following capabilities:
+Azure Site Recovery provides disaster recovery by replicating the critical components in your environment to a cold remote site or a public cloud like Microsoft Azure. Since the virtual machines with the web server and the database are replicated to the recovery site, there's no requirement for a separate backup for configuration files or certificates. The application mappings and bindings dependent on environment variables that are changed post failover can be updated through scripts integrated into the disaster recovery plans. Virtual machines are brought up on the recovery site only during a failover. Azure Site Recovery also helps you orchestrate the end-to-end failover by providing you with the following capabilities:
- Sequencing the shutdown and startup of virtual machines in the various tiers. - Adding scripts to allow updates of application dependencies and bindings on the virtual machines after they've started. The scripts can also be used to update the DNS server to point to the recovery site.
site-recovery Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/transport-layer-security.md
Title: Transport Layer Security in Azure Site Recovery description: Learn how to enable Azure Site Recovery to use the encryption protocol Transport Layer Security (TLS) to keep data secure when being transferred over a network.-+ Last updated 12/15/2023
site-recovery Vmware Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture-modernized.md
Title: VMware VM disaster recovery architecture in Azure Site Recovery - Modernized description: This article provides an overview of components and architecture used when setting up disaster recovery of on-premises VMware VMs to Azure with Azure Site Recovery - Modernized -+ Last updated 12/04/2023
site-recovery Vmware Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture.md
Title: VMware VM disaster recovery architecture in Azure Site Recovery - Classic description: This article provides an overview of components and architecture used when setting up disaster recovery of on-premises VMware VMs to Azure with Azure Site Recovery - Classic -+ Last updated 09/10/2024
A crash consistent snapshot captures data that was on the disk when the snapshot
**Description** | **Details** | **Recommendation** | |
-App-consistent recovery points are created from app-consistent snapshots.<br/><br/> An app-consistent snapshot contain all the information in a crash-consistent snapshot, plus all the data in memory and transactions in progress. | App-consistent snapshots use the Volume Shadow Copy Service (VSS):<br/><br/> 1) Azure Site Recovery uses Copy Only backup (VSS_BT_COPY) method which does not change Microsoft SQL's transaction log backup time and sequence number </br></br> 2) When a snapshot is initiated, VSS perform a copy-on-write (COW) operation on the volume.<br/><br/> 3) Before it performs the COW, VSS informs every app on the machine that it needs to flush its memory-resident data to disk.<br/><br/> 4) VSS then allows the backup/disaster recovery app (in this case Site Recovery) to read the snapshot data and proceed. | App-consistent snapshots are taken in accordance with the frequency you specify. This frequency should always be less than you set for retaining recovery points. For example, if you retain recovery points using the default setting of 24 hours, you should set the frequency at less than 24 hours.<br/><br/>They're more complex and take longer to complete than crash-consistent snapshots.<br/><br/> They affect the performance of apps running on a VM enabled for replication.
+App-consistent recovery points are created from app-consistent snapshots.<br/><br/> An app-consistent snapshot contains all the information in a crash-consistent snapshot, plus all the data in memory and transactions in progress. | App-consistent snapshots use the Volume Shadow Copy Service (VSS):<br/><br/> 1) Azure Site Recovery uses the Copy Only backup (VSS_BT_COPY) method, which does not change Microsoft SQL's transaction log backup time and sequence number </br></br> 2) When a snapshot is initiated, VSS performs a copy-on-write (COW) operation on the volume.<br/><br/> 3) Before it performs the COW, VSS informs every app on the machine that it needs to flush its memory-resident data to disk.<br/><br/> 4) VSS then allows the backup/disaster recovery app (in this case Site Recovery) to read the snapshot data and proceed. | App-consistent snapshots are taken in accordance with the frequency you specify. This frequency should always be less than what you set for retaining recovery points. For example, if you retain recovery points using the default setting of 24 hours, you should set the frequency at less than 24 hours.<br/><br/>They're more complex and take longer to complete than crash-consistent snapshots.<br/><br/> They affect the performance of apps running on a VM enabled for replication.
## Failover and failback process
site-recovery Vmware Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-common-questions.md
Title: Common questions about VMware disaster recovery with Azure Site Recovery description: Get answers to common questions about disaster recovery of on-premises VMware VMs to Azure by using Azure Site Recovery. Last updated 07/10/2024-+
No. Azure Site Recovery cannot use On-demand capacity reservation unless it's Az
### The application license is based on UUID of VMware virtual machine. Is the UUID of a VMware virtual machine changed when it is failed over to Azure?
-Yes, the UUID of the Azure virtual machine is different from the on-prem VMware virtual machine. However, most application vendors support transferring the license to a new UUID. If the application supports it, the customer can work with the vendor to transfer the license to the VM with the new UUID.
+Yes, the UUID of the Azure virtual machine is different from the on-premises VMware virtual machine. However, most application vendors support transferring the license to a new UUID. If the application supports it, the customer can work with the vendor to transfer the license to the VM with the new UUID.
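If you need to capture the new identifier for relicensing, here's a minimal sketch, run inside the Windows guest after failover. This uses a standard Windows CIM class and isn't specific to Site Recovery.

```powershell
# Read the BIOS UUID that the guest OS reports; this value changes after failover to Azure.
(Get-CimInstance -ClassName Win32_ComputerSystemProduct).UUID
```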
## Automation and scripting
site-recovery Vmware Azure Set Up Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication.md
Title: Set up replication policies for VMware disaster recovery with Azure Site Recovery| Microsoft Docs description: Describes how to configure replication settings for VMware disaster recovery to Azure with Azure Site Recovery. - -+ Last updated 05/27/2021
site-recovery Vmware Physical Large Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-large-deployment.md
The Deployment Planner helps you to gather information about your VMware on-prem
- Run the Deployment Planner during a period that represents typical churn for your VMs. This will generate more accurate estimates and recommendations. - We recommend that you run the Deployment Planner on the configuration server machine, since the Planner calculates throughput from the server on which it's running. [Learn more](site-recovery-vmware-deployment-planner-run.md#get-throughput) about measuring throughput.-- If you don't yet have a configuration server set up:
+- If you don't yet have a configuration server setup:
- [Get an overview](vmware-physical-azure-config-process-server-overview.md) of Site Recovery components. - [Set up a configuration server](vmware-azure-deploy-configuration-server.md), in order to run the Deployment Planner on it.
Configuration server capacity is affected by the number of machines replicating,
| | | 8 vCPUs<br> 2 sockets * 4 cores @ 2.5 Ghz | 16 GB | 600 GB | Up to 550 machines<br> Assumes that each machine has three disks of 100 GB each. -- These limits are based on a configuration server set up using an OVF template.
+- These limits are based on a configuration server setup using an OVF template.
- The limits assume that you're not using the process server that's running by default on the configuration server. If you need to add a new configuration server, follow these instructions:
After planning capacity and deploying the required components and infrastructure
1. Sort machines into batches. You enable replication for VMs within a batch, and then move on to the next batch.
- - For VMware VMs, you can use the [recommended VM batch size](site-recovery-vmware-deployment-planner-analyze-report.md#recommended-vm-batch-size-for-initial-replication) in the Deployment Planner report.
+ - For VMware VMs, you can use the [recommended VM batch size](site-recovery-vmware-deployment-planner-analyze-report.md#recommended-virtual-machine-batch-size-for-initial-replication) in the Deployment Planner report.
   - For physical machines, we recommend you identify batches based on machines that have a similar size and amount of data, and on available network throughput. The aim is to batch machines that are likely to finish their initial replication in around the same amount of time. 2. If disk churn for a machine is high, or exceeds limits in the Deployment Planner, you can move non-critical files you don't need to replicate (such as log dumps or temp files) off the machine. For VMware VMs, you can move these files to a separate disk, and then [exclude that disk](vmware-azure-exclude-disk.md) from replication.
storage Storage Blob Scalable App Create Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-create-vm.md
- Title: Create a VM and storage account for a scalable application in Azure
-description: Learn how to deploy a VM to be used to run a scalable application using Azure blob storage
--- Previously updated : 02/20/2018----
-# Create a virtual machine and storage account for a scalable application
-
-This tutorial is part one of a series. This tutorial shows you deploy an application that uploads and download large amounts of random data with an Azure storage account. When you're finished, you have a console application running on a virtual machine that you upload and download large amounts of data to a storage account.
-
-In part one of the series, you learn how to:
-
-> [!div class="checklist"]
-> - Create a storage account
-> - Create a virtual machine
-> - Configure a custom script extension
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
---
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module Az version 0.7 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Create a resource group
-
-Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed.
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name myResourceGroup -Location EastUS
-```
-
-## Create a storage account
-
-The sample uploads 50 large files to a blob container in an Azure Storage account. A storage account provides a unique namespace to store and access your Azure storage data objects. Create a storage account in the resource group you created by using the [New-AzStorageAccount](/powershell/module/az.Storage/New-azStorageAccount) command.
-
-In the following command, substitute your own globally unique name for the Blob storage account where you see the `<blob_storage_account>` placeholder.
-
-```powershell-interactive
-$storageAccount = New-AzStorageAccount -ResourceGroupName myResourceGroup `
- -Name "<blob_storage_account>" `
- -Location EastUS `
- -SkuName Standard_LRS `
- -Kind Storage `
- -AllowBlobPublicAccess $false
-```
-
-## Create a virtual machine
-
-Create a virtual machine configuration. This configuration includes the settings that are used when deploying the virtual machine such as a virtual machine image, size, and authentication configuration. When running this step, you are prompted for credentials. The values that you enter are configured as the user name and password for the virtual machine.
-
-Create the virtual machine with [New-AzVM](/powershell/module/az.compute/new-azvm).
-
-```azurepowershell-interactive
-# Variables for common values
-$resourceGroup = "myResourceGroup"
-$location = "eastus"
-$vmName = "myVM"
-
-# Create user object
-$cred = Get-Credential -Message "Enter a username and password for the virtual machine."
-
-# Create a subnet configuration
-$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name mySubnet -AddressPrefix 192.168.1.0/24
-
-# Create a virtual network
-$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $location `
- -Name MYvNET -AddressPrefix 192.168.0.0/16 -Subnet $subnetConfig
-
-# Create a public IP address and specify a DNS name
-$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $location `
- -Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4
-
-# Create a virtual network card and associate with public IP address
-$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $resourceGroup -Location $location `
- -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id
-
-# Create a virtual machine configuration
-$vmConfig = New-AzVMConfig -VMName myVM -VMSize Standard_DS14_v2 | `
- Set-AzVMOperatingSystem -Windows -ComputerName myVM -Credential $cred | `
- Set-AzVMSourceImage -PublisherName MicrosoftWindowsServer -Offer WindowsServer `
- -Skus 2016-Datacenter -Version latest | Add-AzVMNetworkInterface -Id $nic.Id
-
-# Create a virtual machine
-New-AzVM -ResourceGroupName $resourceGroup -Location $location -VM $vmConfig
-
-Write-host "Your public IP address is $($pip.IpAddress)"
-```
-
-## Deploy configuration
-
-For this tutorial, there are prerequisites that must be installed on the virtual machine. The custom script extension is used to run a PowerShell script that completes the following tasks:
-
-> [!div class="checklist"]
-> - Install .NET Core 2.0
-> - Install Chocolatey
-> - Install Git
-> - Clone the sample repo
-> - Restore NuGet packages
-> - Create 50 1-GB files with random data
-
-Run the following cmdlet to finalize configuration of the virtual machine. This step takes 5-15 minutes to complete.
-
-```azurepowershell-interactive
-# Start a CustomScript extension to use a simple PowerShell script to install .NET core, dependencies, and pre-create the files to upload.
-Set-AzVMCustomScriptExtension -ResourceGroupName myResourceGroup `
- -VMName myVM `
- -Location EastUS `
- -FileUri https://raw.githubusercontent.com/azure-samples/storage-dotnet-perf-scale-app/master/setup_env.ps1 `
- -Run 'setup_env.ps1' `
- -Name DemoScriptExtension
-```
-
-## Next steps
-
-In part one of the series, you learned about creating a storage account, deploying a virtual machine, and configuring the virtual machine with the required prerequisites, including how to:
-
-> [!div class="checklist"]
-> - Create a storage account
-> - Create a virtual machine
-> - Configure a custom script extension
-
-Advance to part two of the series to upload large amounts of data to a storage account using exponential retry and parallelism.
-
-> [!div class="nextstepaction"]
-> [Upload large amounts of large files in parallel to a storage account](storage-blob-scalable-app-upload-files.md)
storage Storage Blob Scalable App Download Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-download-files.md
- Title: Download large amounts of random data from Azure Storage
-description: Learn how to use the Azure SDK to download large amounts of random data from an Azure Storage account
--- Previously updated : 08/07/2024----
-# Download large amounts of random data from Azure storage
-
-This tutorial is part three of a series. This tutorial shows you how to download large amounts of data from Azure storage.
-
-In part three of the series, you learn how to:
-
-> [!div class="checklist"]
-> - Update the application
-> - Run the application
-> - Validate the number of connections
-
-## Prerequisites
-
-To complete this tutorial, you must have completed the previous Storage tutorial: [Upload large amounts of random data in parallel to Azure storage][previous-tutorial].
-
-## Remote into your virtual machine
-
- To create a remote desktop session with the virtual machine, use the following command on your local machine. Replace the IP address with the publicIPAddress of your virtual machine. When prompted, enter the credentials used when creating the virtual machine.
-
-```console
-mstsc /v:<publicIpAddress>
-```
-
-## Update the application
-
-In the previous tutorial, you only uploaded files to the storage account. Open `D:\git\storage-dotnet-perf-scale-app\Program.cs` in a text editor. Replace the `Main` method with the following sample. This example comments out the upload task and uncomments the download task; the task that deletes the content in the storage account when complete remains commented out.
-
-```csharp
-public static void Main(string[] args)
-{
- Console.WriteLine("Azure Blob storage performance and scalability sample");
- // Set threading and default connection limit to 100 to
- // ensure multiple threads and connections can be opened.
- // This is in addition to parallelism with the storage
- // client library that is defined in the functions below.
- ThreadPool.SetMinThreads(100, 4);
- ServicePointManager.DefaultConnectionLimit = 100; // (Or More)
-
- bool exception = false;
- try
- {
- // Call the UploadFilesAsync function.
- // await UploadFilesAsync();
-
- // Uncomment the following line to enable downloading of files from the storage account.
- // This is commented out initially to support the tutorial at
- // https://learn.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
- await DownloadFilesAsync();
- }
- catch (Exception ex)
- {
- Console.WriteLine(ex.Message);
- exception = true;
- }
- finally
- {
- // The following function will delete the container and all files contained in them.
- // This is commented out initially as the tutorial at
- // https://learn.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
- // has you upload only for one tutorial and download for the other.
- if (!exception)
- {
- // await DeleteExistingContainersAsync();
- }
- Console.WriteLine("Press any key to exit the application");
- Console.ReadKey();
- }
-}
-```
-
-After the application has been updated, you need to build the application again. Open a `Command Prompt` and navigate to `D:\git\storage-dotnet-perf-scale-app`. Rebuild the application by running `dotnet build` as seen in the following example:
-
-```console
-dotnet build
-```
-
-## Run the application
-
-Now that the application has been rebuilt, it's time to run it with the updated code. If it's not already open, open a `Command Prompt` and navigate to `D:\git\storage-dotnet-perf-scale-app`.
-
-Type `dotnet run` to run the application.
-
-```console
-dotnet run
-```
-
-The `DownloadFilesAsync` task is shown in the following example:
-
-The application reads the containers located in the storage account. It iterates through the blobs using the [GetBlobs](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobs) method and downloads them to the local machine using the [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) method.
--
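For reference, the following is a minimal sketch of what `DownloadFilesAsync` might look like with the Azure.Storage.Blobs v12 client library. It isn't the sample repo's exact implementation (the real sample adds parallelism and transfer tuning), and the local `download` folder name is an assumption for illustration only.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// Minimal sketch, not the sample's actual code: list each container,
// enumerate its blobs with GetBlobs, and write each blob to disk with DownloadToAsync.
private static async Task DownloadFilesAsync()
{
    string connectionString = Environment.GetEnvironmentVariable("storageconnectionstring");
    BlobServiceClient serviceClient = new BlobServiceClient(connectionString);
    Directory.CreateDirectory("download");

    await foreach (var containerItem in serviceClient.GetBlobContainersAsync())
    {
        BlobContainerClient containerClient = serviceClient.GetBlobContainerClient(containerItem.Name);

        foreach (var blobItem in containerClient.GetBlobs())
        {
            BlobClient blobClient = containerClient.GetBlobClient(blobItem.Name);
            await blobClient.DownloadToAsync(Path.Combine("download", blobItem.Name));
            Console.WriteLine($"Downloaded {blobItem.Name} from container {containerItem.Name}");
        }
    }
}
```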
-### Validate the connections
-
-While the files are being downloaded, you can verify the number of concurrent connections to your storage account. Open a console window and type `netstat -a | find /c "blob:https"`. This command shows the number of connections that are currently open. As you can see from the following example, over 280 connections were open while downloading files from the storage account.
-
-```console
-C:\>netstat -a | find /c "blob:https"
-289
-
-C:\>
-```
-
-## Next steps
-
-In part three of the series, you learned about downloading large amounts of data from a storage account, including how to:
-
-> [!div class="checklist"]
-> - Run the application
-> - Validate the number of connections
-
-Go to part four of the series to verify throughput and latency metrics in the portal.
-
-> [!div class="nextstepaction"]
-> [Verify throughput and latency metrics in the portal](storage-blob-scalable-app-verify-metrics.md)
-
-[previous-tutorial]: storage-blob-scalable-app-upload-files.md
storage Storage Blob Scalable App Upload Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-upload-files.md
- Title: Upload large amounts of random data in parallel to Azure Storage
-description: Learn how to use the Azure Storage client library to upload large amounts of random data in parallel to an Azure Storage account
--- Previously updated : 08/14/2024---
-# Upload large amounts of random data in parallel to Azure storage
-
-This tutorial is part two of a series. This tutorial shows you how to deploy an application that uploads large amounts of random data to an Azure storage account.
-
-In part two of the series, you learn how to:
-
-> [!div class="checklist"]
-> - Configure the connection string
-> - Build the application
-> - Run the application
-> - Validate the number of connections
-
-Microsoft Azure Blob Storage provides a scalable service for storing your data. To ensure that your application is as performant as possible, an understanding of how blob storage works is recommended. Knowledge of the limits for Azure blobs is also important. To learn more about these limits, see [Scalability and performance targets for Blob storage](../blobs/scalability-targets.md).
-
-[Partition naming](../blobs/storage-performance-checklist.md#partitioning) is another potentially important factor when designing a high-performance application using blobs. For block sizes greater than or equal to 4 MiB, [High-Throughput block blobs](https://azure.microsoft.com/blog/high-throughput-with-azure-blob-storage/) are used, and partition naming will not impact performance. For block sizes less than 4 MiB, Azure storage uses a range-based partitioning scheme to scale and load balance. This configuration means that files with similar naming conventions or prefixes go to the same partition. This logic includes the name of the container that the files are being uploaded to. In this tutorial, you use files that have GUIDs for names as well as randomly generated content. They are then uploaded to five different containers with random names.
-
-## Prerequisites
-
-To complete this tutorial, you must have completed the previous Storage tutorial: [Create a virtual machine and storage account for a scalable application][previous-tutorial].
-
-## Remote into your virtual machine
-
-Use the following command on your local machine to create a remote desktop session with the virtual machine. Replace the IP address with the publicIPAddress of your virtual machine. When prompted, enter the credentials you used when creating the virtual machine.
-
-```console
-mstsc /v:<publicIpAddress>
-```
-
-## Configure the connection string
-
-In the Azure portal, navigate to your storage account. Select **Access keys** under **Settings** in your storage account. Copy the **connection string** from the primary or secondary key. Log in to the virtual machine you created in the previous tutorial. Open a **Command Prompt** as an administrator and run the `setx` command with the `/m` switch. This command saves a machine-level environment variable. The environment variable isn't available until you reload the **Command Prompt**. Replace **\<storageConnectionString\>** in the following sample:
-
-```console
-setx storageconnectionstring "<storageConnectionString>" /m
-```
-
-> [!IMPORTANT]
-> This code example uses a connection string to authorize access to your storage account. This configuration is for example purposes. Connection strings and account access keys should be used with caution in application code. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data.
-
-When finished, open another **Command Prompt**, navigate to `D:\git\storage-dotnet-perf-scale-app` and type `dotnet build` to build the application.
-
-## Run the application
-
-Navigate to `D:\git\storage-dotnet-perf-scale-app`.
-
-Type `dotnet run` to run the application. The first time you run `dotnet`, it populates your local package cache to improve restore speed and enable offline access. This step takes up to a minute to complete and happens only once.
-
-```console
-dotnet run
-```
-
-The application creates five randomly named containers and begins uploading the files in the staging directory to the storage account.
-
-The `UploadFilesAsync` method is shown in the following example:
--
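For reference, the following is a minimal sketch of what `UploadFilesAsync` might look like with the Azure.Storage.Blobs v12 client library. It isn't the sample repo's exact implementation (the real sample creates five containers and uploads in parallel), and the local `upload` staging folder name is an assumption for illustration only.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// Minimal sketch, not the sample's actual code: create a randomly named (GUID)
// container, then upload every file from the local staging folder into it.
private static async Task UploadFilesAsync()
{
    string connectionString = Environment.GetEnvironmentVariable("storageconnectionstring");
    BlobServiceClient serviceClient = new BlobServiceClient(connectionString);

    // GUID-based container and blob names avoid shared prefixes, so uploads
    // spread across Azure Storage range partitions.
    BlobContainerClient containerClient =
        serviceClient.GetBlobContainerClient(Guid.NewGuid().ToString());
    await containerClient.CreateIfNotExistsAsync();
    Console.WriteLine($"Created container {containerClient.Name}");

    foreach (string filePath in Directory.GetFiles("upload"))
    {
        BlobClient blobClient = containerClient.GetBlobClient(Path.GetFileName(filePath));
        await blobClient.UploadAsync(filePath, overwrite: true);
        Console.WriteLine($"Uploading {Path.GetFileName(filePath)} to container {containerClient.Name}");
    }
}
```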
-The following example is a truncated application output running on a Windows system.
-
-```console
-Created container 2dbb45f4-099e-49eb-880c-5b02ebac135e
-Created container 0d784365-3bdf-4ef2-b2b2-c17b6480792b
-Created container 42ac67f2-a316-49c9-8fdb-860fb32845d7
-Created container f0357772-cb04-45c3-b6ad-ff9b7a5ee467
-Created container 92480da9-f695-4a42-abe8-fb35e71eb887
-Iterating in directory: C:\git\myapp\upload
-Found 5 file(s)
-Uploading 1d596d16-f6de-4c4c-8058-50ebd8141e4d.pdf to container 2dbb45f4-099e-49eb-880c-5b02ebac135e
-Uploading 242ff392-78be-41fb-b9d4-aee8152a6279.pdf to container 0d784365-3bdf-4ef2-b2b2-c17b6480792b
-Uploading 38d4d7e2-acb4-4efc-ba39-f9611d0d55ef.pdf to container 42ac67f2-a316-49c9-8fdb-860fb32845d7
-Uploading 45930d63-b0d0-425f-a766-cda27ff00d32.pdf to container f0357772-cb04-45c3-b6ad-ff9b7a5ee467
-Uploading 5129b385-5781-43be-8bac-e2fbb7d2bd82.pdf to container 92480da9-f695-4a42-abe8-fb35e71eb887
-Uploaded 5 files in 16.9552163 seconds
-```
-
-### Validate the connections
-
-While the files are being uploaded, you can verify the number of concurrent connections to your storage account. Open a console window and type `netstat -a | find /c "blob:https"`. This command shows the number of connections that are currently open. As you can see from the following example, 800 connections were open while uploading the random files to the storage account. This value changes throughout the upload. By uploading in parallel block chunks, the amount of time required to transfer the contents is greatly reduced.
-
-```
-C:\>netstat -a | find /c "blob:https"
-800
-
-C:\>
-```
-
-## Next steps
-
-In part two of the series, you learned about uploading large amounts of random data to a storage account in parallel, such as how to:
-
-> [!div class="checklist"]
-> - Configure the connection string
-> - Build the application
-> - Run the application
-> - Validate the number of connections
-
-Advance to part three of the series to download large amounts of data from a storage account.
-
-> [!div class="nextstepaction"]
-> [Download large amounts of random data from Azure storage](storage-blob-scalable-app-download-files.md)
-
-[previous-tutorial]: storage-blob-scalable-app-create-vm.md
storage Storage Blob Scalable App Verify Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-verify-metrics.md
- Title: Verify throughput and latency metrics for a storage account in the Azure portal
-description: Learn how to verify throughput and latency metrics for a storage account in the portal.
--- Previously updated : 02/20/2018---
-# Verify throughput and latency metrics for a storage account
-
-This tutorial is part four, the final part of a series. In the previous tutorials, you learned how to upload and download large amounts of random data to and from an Azure storage account. This tutorial shows you how to use metrics to view throughput and latency in the Azure portal.
-
-In part four of the series, you learn how to:
-
-> [!div class="checklist"]
-> - Configure charts in the Azure portal
-> - Verify throughput and latency metrics
-
-[Azure Storage metrics](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) uses Azure Monitor to provide a unified view into the performance and availability of your storage account.
-
-## Configure metrics
-
-1. Navigate to **Metrics** under **SETTINGS** in your storage account.
-
-1. Choose Blob from the **SUB SERVICE** drop-down.
-
-1. Under **METRIC**, select one of the metrics. For a list of supported metrics, see [Supported metrics for Microsoft.Storage/storageAccounts](monitor-blob-storage-reference.md#supported-metrics-for-microsoftstoragestorageaccounts).
-
- These metrics give you an idea of the latency and throughput of the application. The metrics that you configure in the portal are 1-minute averages. If a transaction finishes in the middle of a minute, that minute's data is halved for the average. In the application, the upload and download operations were timed, and the output shows the actual amount of time it took to upload and download the files. You can use this information together with the portal metrics to fully understand throughput.
-
-1. Select **Last 24 hours (Automatic)** next to **Time**. Choose **Last hour** and **Minute** for **Time granularity**, then click **Apply**.
-
- ![Storage account metrics](./media/storage-blob-scalable-app-verify-metrics/figure1.png)
-
-Charts can have more than one metric assigned to them, but assigning more than one metric disables the ability to group by dimensions.
-
-## Dimensions
-
-[Dimensions](./monitor-blob-storage-reference.md?toc=/azure/storage/blobs/toc.json#metrics-dimensions) are used to look deeper into the charts and get more detailed information. Different metrics have different dimensions. One dimension that is available is the **API name** dimension. This dimension breaks out the chart into each separate API call. The first image below shows an example chart of total transactions for a storage account. The second image shows the same chart but with the API name dimension selected. As you can see, each API call is listed, giving more detail about how many calls were made by API name.
-
-![Storage account metrics - transactions without a dimension](./media/storage-blob-scalable-app-verify-metrics/transactionsnodimensions.png)
-
-![Storage account metrics - transactions](./media/storage-blob-scalable-app-verify-metrics/transactions.png)
-
-## Clean up resources
-
-When no longer needed, delete the resource group, virtual machine, and all related resources. To do so, select the resource group for the VM and click Delete.
-
-## Next steps
-
-In part four of the series, you learned about viewing metrics for the example solution, such as how to:
-
-> [!div class="checklist"]
-> - Configure charts in the Azure portal
-> - Verify throughput and latency metrics
-
-Follow this link to see pre-built storage samples.
-
-> [!div class="nextstepaction"]
-> [Azure storage script samples](storage-samples-blobs-cli.md)
-
-[previous-tutorial]: storage-blob-scalable-app-download-files.md
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
Copies source data to a destination location. ## Synopsis
storage Storage Ref Azcopy Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md
Replicates the source location to the destination location. This article provides a detailed reference for the azcopy sync command. To learn more about synchronizing blobs between source and destination locations, see [Synchronize with Azure Blob storage by using AzCopy v10](storage-use-azcopy-blobs-synchronize.md). For Azure Files, see [Synchronize files](storage-use-azcopy-files.md#synchronize-files). + ## Synopsis The last modified times are used for comparison. The file is skipped if the last modified time in the destination is more recent. Alternatively, you can use the `--compare-hash` flag to transfer only files which differ in their MD5 hash. The supported pairs are:
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
description: AzCopy is a command-line utility that you can use to copy data to,
Previously updated : 07/18/2024 Last updated : 09/27/2024
AzCopy is a command-line utility that you can use to copy blobs or files to or f
> > If you need to use a previous version of AzCopy, see the [Use the previous version of AzCopy](#previous-version) section of this article.
-<a id="download-and-install-azcopy"></a>
+<a id="download-and-install-azcopy"></a>This video shows you how to download and run the AzCopy utility.
-This video shows you how to download and run the AzCopy utility.
> [!VIDEO 4238a2be-881a-4aaa-8ccd-07a6557a05ef] The steps in the video are also described in the following sections.
+## Use cases for AzCopy
+
+AzCopy can be used to copy your data to, from, or between Azure storage accounts. Common use cases include:
+
+- Copying data from an on-premises source to an Azure storage account
+- Copying data from an Azure storage account to an on-premises source
+- Copying data from one storage account to another storage account
+
+Each of these use cases has unique options. For example, AzCopy has native commands for copying and for synchronizing data, which makes it a flexible tool for both one-time copy activities and ongoing synchronization scenarios. AzCopy also lets you target specific storage services such as Azure Blob Storage or Azure Files, so you can copy data from blob to file, file to blob, or file to file.
+
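As an illustration (not taken from this article), the copy and synchronization scenarios map to the `azcopy copy` and `azcopy sync` commands. The account, container, and path names below are placeholders, and authentication (for example, `azcopy login` or a SAS token appended to the URL) is omitted:

```console
azcopy copy 'C:\local\data' 'https://<storage-account>.blob.core.windows.net/<container>' --recursive
azcopy sync 'C:\local\data' 'https://<storage-account>.blob.core.windows.net/<container>' --recursive
```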
+To learn more about these scenarios, see:
+
+- [Upload files to Azure Blob storage by using AzCopy](storage-use-azcopy-blobs-upload.md)
+- [Download blobs from Azure Blob Storage by using AzCopy](storage-use-azcopy-blobs-download.md)
+- [Copy blobs between Azure storage accounts by using AzCopy](storage-use-azcopy-blobs-copy.md)
+- [Synchronize with Azure Blob storage by using AzCopy](storage-use-azcopy-blobs-synchronize.md)
++ ## Install AzCopy on Linux by using a package manager You can install AzCopy by using a Linux package that is hosted on the [Linux Software Repository for Microsoft Products](/linux/packages).
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
The following resources are available to help you migrate backup files or to cop
|Resource |Description | ||-| |[Azure StorSimple 8000 Series Copy Utility](https://aka.ms/storsimple-copy-utility) |Microsoft is providing a read-only data copy utility to recover and migrate your backup files from StorSimple cloud snapshots. The StorSimple 8000 Series Copy Utility is designed to run in your environment. You can install and configure the Utility, and then use your Service Encryption Key to authenticate and download your metadata from the cloud.|
-|[Azure StorSimple 8000 Series Copy Utility documentation](https://aka.ms/storsimple-copy-utility-docs) |Instructions for use of the Copy Utility. |
-|[StorSimple archived documentation](https://aka.ms/storsimple-archive-docs) |Archived StorSimple articles from Microsoft technical documentation. |
+|Azure StorSimple 8000 Series Copy Utility documentation |Instructions for use of the Copy Utility. |
+|StorSimple archived documentation |Archived StorSimple articles from Microsoft technical documentation. |
## Copy data and then decommission your appliance
Use the following steps to copy data to your environment and then decommission y
**Step 1: Copy backup files or live data to your own environment.** -- **Backup files.** If you have backup files, use the Azure StorSimple 8000 Series Copy Utility to migrate backup files to your environment. For more information, see [Copy Utility documentation](https://aka.ms/storsimple-copy-utility-docs).
+- **Backup files.** If you have backup files, use the Azure StorSimple 8000 Series Copy Utility to migrate backup files to your environment.
- **Live data.** If you have live data to copy, you can access and copy live data to your environment via iSCSI. **Step 2: Decommission your device.**
Use the following steps to create a support ticket for StorSimple data copy, dat
![Screenshot of the Review and create support request page in Azure portal.](./media/storsimple-overview/storsimple-support-review-details-6.png) Microsoft Support will use this information to reach out to you for additional details and diagnosis. A Support engineer will contact you as soon as possible to proceed with your request.-
-## Next steps
--- [StorSimple 8000 series copy utility documentation](https://aka.ms/storsimple-copy-utility-docs).
stream-analytics Azure Data Explorer Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-data-explorer-managed-identity.md
# Use managed identities to access Azure Data Explorer from an Azure Stream Analytics job
-Azure Stream Analytics supports managed identity authentication for Azure Data Explorer output. Managed identities for Azure resources is a cross-Azure feature that enables you to create a secure identity associated with the deployment under which your application code runs. You can then associate that identity with access-control roles that grant custom permissions for accessing specific Azure resources that your application needs.
+Azure Stream Analytics supports managed identity authentication for Azure Data Explorer output. Managed identity for Azure resources is a cross-Azure feature that enables you to create a secure identity associated with the deployment under which your application code runs. You can then associate that identity with access-control roles that grant custom permissions for accessing specific Azure resources that your application needs.
-With managed identities, the Azure platform manages this runtime identity. You do not need to store and protect access keys in your application code or configuration, either for the identity itself, or for the resources you need to access. For more information on managed identities for Azure Stream Analytics, see [Managed identities for Azure Stream Analytics](stream-analytics-managed-identities-overview.md).
+With managed identities, the Azure platform manages this runtime identity. You don't need to store and protect access keys in your application code or configuration, either for the identity itself, or for the resources you need to access. For more information on managed identities for Azure Stream Analytics, see [Managed identities for Azure Stream Analytics](stream-analytics-managed-identities-overview.md).
This article shows you how to enable system-assigned managed identity for an Azure Data Explorer output of a Stream Analytics job through the Azure portal. Before you can enable system-assigned managed identity, you must first have a Stream Analytics job and an Azure Data Explorer resource.
For the Stream Analytics job to access your Azure Data Explorer cluster using ma
| Role | Permissions | ||-|
-| Data ingestor | Can ingest data into all existing tables in the database, but can't query the data. |
-| Data monitor | Can execute .show commands in the context of the database and its child entities. |
+| Ingestor | Can ingest data into all existing tables in the database, but can't query the data. |
+| Monitor | Can execute `.show` commands in the context of the database and its child entities. |
+
+For more information about the roles supported in Azure Data Explorer, see [Role-based access control in Azure Data Explorer](/kusto/access-control/role-based-access-control?view=azure-data-explorer&preserve-view=true#roles-and-permissions).
1. Select **Access control (IAM)**.
For the Stream Analytics job to access your Azure Data Explorer cluster using ma
| Setting | Value | | | |
- | Role | Data ingestor and Data monitor |
+ | Role | Ingestor and Monitor |
| Assign access to | User, group, or service principal | | Members | \<Name of your Stream Analytics job> |
Now that your managed identity is configured, you're ready to add the Azure Data
1. Go to your Stream Analytics job and navigate to the **Outputs** page under **Job Topology**.
-1. Select **Add > Azure Data Explorer**. In the output properties window, search and select your Azure Data Explorer (kusto) cluster or type in the URL of your cluster and select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
+1. Select **Add > Azure Data Explorer**. In the output properties window, search and select your Azure Data Explorer cluster or type in the URL of your cluster and select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
1. Fill out the rest of the properties and select **Save**.
synapse-analytics Data Explorer Ingest Data Supported Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-supported-formats.md
Data ingestion is the process by which data is added to a table and is made avai
|Format |Extension |Description| |||--|
-|ApacheAvro|`.avro` |An [AVRO](https://avro.apache.org/docs/current/) format with support for [logical types](https://avro.apache.org/docs/current/spec.html#Logical+Types). The following compression codecs are supported: `null`, `deflate`, and `snappy`. Reader implementation of the `apacheavro` format is based on the official [Apache Avro library](https://github.com/apache/avro).|
+|ApacheAvro|`.avro` |An [AVRO](https://avro.apache.org/docs/current/) format with support for [logical types](https://avro.apache.org/docs/1.11.1/specification/#Logical+Types). The following compression codecs are supported: `null`, `deflate`, and `snappy`. Reader implementation of the `apacheavro` format is based on the official [Apache Avro library](https://github.com/apache/avro).|
|Avro |`.avro` |A legacy implementation for [AVRO](https://avro.apache.org/docs/current/) format based on [.NET library](https://www.nuget.org/packages/Microsoft.Hadoop.Avro). The following compression codecs are supported: `null`, `deflate` (for `snappy` - use `ApacheAvro` data format).| |CSV |`.csv` |A text file with comma-separated values (`,`). See [RFC 4180: _Common Format and MIME Type for Comma-Separated Values (CSV) Files_](https://www.ietf.org/rfc/rfc4180.txt).| |JSON |`.json` |A text file with JSON objects delimited by `\n` or `\r\n`. See [JSON Lines (JSONL)](http://jsonlines.org/).|
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
Title: Access control in Synapse workspace how to description: Learn how to control access to Azure Synapse workspaces using Azure roles, Synapse roles, SQL permissions, and Git permissions. --- Previously updated : 9/12/2024 +++ Last updated : 09/26/2024
This guide has focused on setting up a basic access control system. You can supp
**Disable local authentication**. By allowing only Microsoft Entra authentication, you can centrally manage access to Azure Synapse resources, such as SQL pools. Local authentication for all resources within the workspace can be disabled during or after workspace creation. For more information on Microsoft Entra-only authentication, see [Disabling local authentication in Azure Synapse Analytics](../sql/active-directory-authentication.md#disable-local-authentication).
-## Next steps
+## Related content
+ - [Manage Azure Synapse RBAC role assignments](./how-to-manage-synapse-rbac-role-assignments.md)
+ - [Create a Synapse workspace](../quickstart-create-workspace.md)
synapse-analytics Apache Spark Machine Learning Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-concept.md
There are several options when training machine learning models using Azure Spar
Learn more about the machine learning capabilities by viewing the article on how to [train models in Azure Synapse Analytics](../spark/apache-spark-machine-learning-training.md). ### SparkML and MLlib
-Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. ```spark.ml``` provides a uniform set of high-level APIs that help users create and tune machine learning pipelines.To learn more about ```spark.ml```, you can visit the [Apache Spark ML programming guide](https://spark.apache.org/docs/1.2.2/ml-guide.html).
+Spark's in-memory distributed computation capabilities make it a good choice for the iterative algorithms used in machine learning and graph computations. ```spark.ml``` provides a uniform set of high-level APIs that help users create and tune machine learning pipelines. To learn more about ```spark.ml```, you can visit the [Apache Spark ML programming guide](https://archive.apache.org/dist/spark/docs/1.2.2/ml-guide.html).
### Azure Machine Learning automated ML (deprecated) [Azure Machine Learning automated ML](/azure/machine-learning/concept-automated-ml) (automated machine learning) helps automate the process of developing machine learning models. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. The components to run the Azure Machine Learning automated ML SDK is built directly into the Synapse Runtime.
synapse-analytics Apache Spark Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance.md
MAX(AMOUNT) -> MAX(cast(AMOUNT as DOUBLE))
## Next steps - [Learn about Azure Synapse runtimes for Apache Spark](./apache-spark-version-support.md)-- [Tuning Apache Spark](https://spark.apache.org/docs/2.4.5/tuning.html)
+- [Tuning Apache Spark](https://archive.apache.org/dist/spark/docs/2.4.5/tuning.html)
- [How to Actually Tune Your Apache Spark Jobs So They Work](https://www.slideshare.net/ilganeli/how-to-actually-tune-your-spark-jobs-so-they-work) - [Kryo Serialization](https://github.com/EsotericSoftware/kryo)
synapse-analytics Intellij Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/intellij-tool-synapse.md
After creating a Scala application, you can remotely run it.
|Main class name|The default value is the main class from the selected file. You can change the class by selecting the ellipsis(**...**) and choosing another class.| |Job configurations|You can change the default key and values. For more information, see [Apache Livy REST API](http://livy.incubator.apache.org./docs/latest/rest-api.html).| |Command-line arguments|You can enter arguments separated by space for the main class if needed.|
- |Referenced Jars and Referenced Files|You can enter the paths for the referenced Jars and files if any. You can also browse files in the Azure virtual file system, which currently only supports ADLS Gen2 cluster. For more information: [Apache Spark Configuration](https://spark.apache.org/docs/2.4.5/configuration.html#runtime-environment) and [How to upload resources to cluster](../../storage/blobs/quickstart-storage-explorer.md).|
+ |Referenced Jars and Referenced Files|You can enter the paths for the referenced Jars and files if any. You can also browse files in the Azure virtual file system, which currently only supports ADLS Gen2 cluster. For more information: [Apache Spark Configuration](https://archive.apache.org/dist/spark/docs/2.4.5/configuration.html#runtime-environment) and [How to upload resources to cluster](../../storage/blobs/quickstart-storage-explorer.md).|
|Job Upload Storage|Expand to reveal additional options.| |Storage Type|Select **Use Azure Blob to upload** or **Use cluster default storage account to upload** from the drop-down list.| |Storage Account|Enter your storage account.|
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
description: This article contains information that can help you troubleshoot pr
Previously updated : 08/27/2024+ - Last updated : 09/26/2024 # Troubleshoot serverless SQL pool in Azure Synapse Analytics
If you get the error `CREATE DATABASE failed. User database limit has been alrea
You don't need to use separate databases to isolate data for different tenants. All data is stored externally on a data lake and Azure Cosmos DB. The metadata like table, views, and function definitions can be successfully isolated by using schemas. Schema-based isolation is also used in Spark where databases and schemas are the same concepts.
-## Next steps
+## Related content
- [Best practices for serverless SQL pool in Azure Synapse Analytics](best-practices-serverless-sql-pool.md) - [Azure Synapse Analytics frequently asked questions](../overview-faq.yml) - [Store query results to storage using serverless SQL pool in Azure Synapse Analytics](create-external-table-as-select.md)-- [Synapse Studio troubleshooting](../troubleshoot/troubleshoot-synapse-studio.md)-- [Troubleshoot a slow query on a dedicated SQL Pool](/troubleshoot/azure/synapse-analytics/dedicated-sql/troubleshoot-dsql-perf-slow-query)
virtual-desktop Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-properties.md
description: Learn about the supported RDP properties you can set to customize t
Previously updated : 08/07/2024 Last updated : 09/27/2024 # Supported RDP properties
Here are the RDP properties that you can use to configure display settings.
- Remote Desktop Services - Remote PC connections
+### `desktopscalefactor`
+
+- **Syntax**: `desktopscalefactor:i:<value>`
+- **Description**: Specifies the scale factor of the remote session to make the content appear larger.
+- **Supported values**:
+ - Numerical value from the following list: `100`, `125`, `150`, `175`, `200`, `250`, `300`, `400`, `500`
+- **Default value**: None. Matches the local device.
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
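As an illustration (not from the source article), a line in a `.rdp` file that sets a 150 percent scale factor with this property would look like the following, using one of the supported values listed above:

```
desktopscalefactor:i:150
```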
+
+> [!NOTE]
+> The `desktopscalefactor` property is being deprecated and will soon be unavailable.
+ ### `desktopwidth` - **Syntax**: `desktopwidth:i:<value>`
virtual-desktop Troubleshoot Client Windows Basic Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows-basic-shared.md
zone_pivot_groups: azure-virtual-desktop-windows-client-troubleshoot
Previously updated : 10/12/2023 Last updated : 09/25/2024 # Basic troubleshooting for the Remote Desktop client for Windows
There are a few basic troubleshooting steps you can try if you're having issues
1. If none of the previous steps resolved your issue, you can use the *Troubleshoot & repair* tool in the developer portal to diagnose and repair some common dev box connectivity issues. To learn how to use the Troubleshoot & repair tool, see [Troubleshoot and resolve dev box remote desktop connectivity issues](../dev-box/how-to-troubleshoot-repair-dev-box.md). ::: zone-end
+## Reset password
+
+Password resets can't be done in the product. You should follow your organization's process to reset your password.
+ ## Client stops responding or can't be opened If the client stops responding or can't be opened, you might need to reset user data. If you can open the client, you can reset user data from the **About** menu. The default settings for the client will be restored and you'll be unsubscribed from all workspaces.
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
Previously updated : 08/15/2024 Last updated : 09/27/2024 # Public IP addresses
Static public IP addresses are commonly used in the following scenarios:
| Basic public IPv6 | x | :white_check_mark: | ## Availability Zone
+> [!IMPORTANT]
+> We are updating Standard non-zonal IPs to be zone-redundant by default on a region-by-region basis. This means that in the following regions, all IPs created (except zonal) are zone-redundant.
+> Region availability: Central Canada, Central Poland, Central Israel, Central France, Central Qatar, East Asia, East US 2, East Norway, Italy North, Sweden Central, South Africa North, South Brazil, West Central Germany, West US 2, Central Spain
+>
Standard SKU Public IPs can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md). Basic SKU Public IPs do not have any zones and are created as non-zonal. A public IP's availability zone can't be changed after the public IP's creation.
A public IP's availability zone can't be changed after the public IP's creation.
In regions without availability zones, all public IP addresses are created as nonzonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal.
-> [!IMPORTANT]
-> We are updating Standard non-zonal IPs to be zone-redundant by default on a region by region basis. This means that in the following regions, all IPs created (except zonal) are zone-redundant.
-> Region availability: Central Canada, Central Poland, Central Israel, Central France, Central Qatar, East Asia, East US 2, East Norway, Italy North, Sweden Central, South Africa North, South Brazil, West Central Germany, West US 2.
- ## Domain Name Label Select this option to specify a DNS label for a public IP resource. This functionality works for both IPv4 addresses (32-bit A records) and IPv6 addresses (128-bit AAAA records). This selection creates a mapping for **domainnamelabel**.**location**.cloudapp.azure.com to the public IP in the Azure-managed DNS.
virtual-wan User Groups Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-create.md
For Network Policy Server (NPS) vendor-specific attributes configuration informa
### Certificates
-To generate self-signed certificates, see [Generate and export certificates for User VPN P2S connections: PowerShell](certificates-point-to-site.md). To generate a certificate with a specific Common Name, change the **Subject** parameter to the appropriate value (example, xx@domain.com) when running the `New-SelfSignedCertificate` PowerShell command.
+To generate self-signed certificates, see [Generate and export certificates for User VPN P2S connections: PowerShell](certificates-point-to-site.md). To generate a certificate with a specific Common Name, change the **Subject** parameter to the appropriate value (for example, xx@domain.com) when running the `New-SelfSignedCertificate` PowerShell command. For example, you can generate certificates with the following **Subject** values:
+
+| **Digital certificate field** | Value | Description |
+|---|---|---|
+| **Subject** | CN=cert@marketing.contoso.com | Digital certificate for the Marketing department |
+| **Subject** | CN=cert@sale.contoso.com | Digital certificate for the Sales department |
+| **Subject** | CN=cert@engineering.contoso.com | Digital certificate for the Engineering department |
+| **Subject** | CN=cert@finance.contoso.com | Digital certificate for the Finance department |
+
+> [!NOTE]
+> The multiple address pool feature with digital certificate authentication applies to a specific user group based on the **Subject** field. The selection criteria do not work with Subject Alternative Name (SAN) certificates.
+ ## Step 3: Create a user group
Use the following steps to create a user group.
:::image type="content" source="./media/user-groups-create/select-groups.png" alt-text="Screenshot of Edit User VPN gateway page with groups selected." lightbox="./media/user-groups-create/select-groups.png":::
-1. For **Address Pools**, select **Configure** to open the **Specify Address Pools** page. On this page, associate new address pools with this configuration. Users who are members of groups associated to this configuration will be assigned IP addresses from the specified pools. Based on the number of **Gateway Scale Units** associated to the gateway, you might need to specify more than one address pool. Address pools can't be smaller than /24. For example you can't assign a range of /25 or /26 if you want to have a smaller address pool range for the usergroups. The minimum prefix is /24. Select **Add** and **Okay** to save your address pools.
+1. For **Address Pools**, select **Configure** to open the **Specify Address Pools** page. On this page, associate new address pools with this configuration. Users who are members of groups associated to this configuration will be assigned IP addresses from the specified pools. Based on the number of **Gateway Scale Units** associated to the gateway, you might need to specify more than one address pool. Address pools can't be smaller than /24. For example, you can't assign a /25 or /26 range if you want a smaller address pool range for the user groups. The minimum prefix is /24. Select **Add** and **Okay** to save your address pools.
:::image type="content" source="./media/user-groups-create/address-pools.png" alt-text="Screenshot of Specify Address Pools page." lightbox="./media/user-groups-create/address-pools.png":::
vpn-gateway Gateway Sku Consolidation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-consolidation.md
# VPN Gateway SKU consolidation and migration
-We're simplifying our VPN Gateway SKU portfolio. Due to the lack of redundancy, lower availability, and potential higher costs associated with additional failover solutions, we're transitioning all non availability zone (AZ) supported SKUs to AZ supported SKUs. This article helps you understand the upcoming changes for VPN Gateway virtual network gateway SKUs. This article expands on the official announcement.
+We're simplifying our VPN Gateway SKU portfolio. Due to the lack of redundancy, lower availability, and potential higher costs associated with additional failover solutions, we're transitioning all non-availability-zone (non-AZ) SKUs to AZ-supported SKUs. This article helps you understand the upcoming changes for VPN Gateway virtual network gateway SKUs. This article expands on the [official announcement](https://azure.microsoft.com/updates/v2/vpngw1-5-non-az-skus-will-be-retired-on-30-september-2026).
* **Effective January 1, 2025**: Creation of new VPN gateways using VpnGw1-5 SKUs (non-AZ) will no longer be possible. * **Migration period**: From April 2025 to October 2026, all existing VPN gateways using VpnGw1-5 SKUs (non-AZ SKUs) will be seamlessly migrated to VpnGw1-5 SKUs (AZ).
vpn-gateway Nva Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nva-work-remotely-support.md
Most major NVA partners have posted guidance around scaling for sudden, unexpect
[Cisco AnyConnect Implementation and Performance/Scaling Reference for COVID-19 Preparation](https://www.cisco.com/c/en/us/support/docs/security/anyconnect-secure-mobility-client/215331-anyconnect-implementation-and-performanc.html "Cisco AnyConnect Implementation and Performance/Scaling Reference for COVID-19 Preparation")
-[Citrix COVID-19 Response Support Center](https://www.citrix.com/content/dam/citrix/en_us/documents/ebook/back-to-the-office.pdf "Citrix COVID-19 Response Support Center")
- [F5 Guidance to Address the Dramatic Increase in Remote Workers](https://www.f5.com/business-continuity "F5 Guidance to Address the Dramatic Increase in Remote Workers") [Fortinet COVID-19 Updates for Customers and Partners](https://www.fortinet.com/covid-19.html "COVID-19 Updates for Customers and Partners")
vpn-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/whats-new.md
You can also find the latest VPN Gateway updates and subscribe to the RSS feed [
| Type | Area | Name | Description | Date added | Limitations | |||||||
+|SKU Consolidation | N/A | [VpnGw1-5 non-AZ VPN Gateway SKU](https://learn.microsoft.com/azure/vpn-gateway/gateway-sku-consolidation) | VpnGw1-5 non-AZ SKUs will be deprecated on 30 Sep 2026. See the [announcement](https://azure.microsoft.com/updates/v2/vpngw1-5-non-az-skus-will-be-retired-on-30-september-2026). | Sep 2024 | N/A|
| P2S VPN | P2S | [Azure VPN Client for Linux](#linux)| [Certificate](point-to-site-certificate-client-linux-azure-vpn-client.md) authentication, [Microsoft Entra ID ](point-to-site-entra-vpn-client-linux.md) authentication.| May 2024 | N/A| | P2S VPN | P2S | [Azure VPN Client for macOS](#macos) | Microsoft Entra ID authentication updates, additional features. | Sept 2024 | N/A| | P2S VPN | P2S | [Azure VPN Client for Windows](#windows) | Microsoft Entra ID authentication updates, additional features. | May 2024 | N/A|