Updates from: 11/10/2022 02:15:03
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Concepts Migration Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-migration-benefits.md
To get started, see [Migrate Azure AD Domain Services from the Classic virtual n
[azure-files]: ../storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md [hd-insights]: ../hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md [avd]: ../virtual-desktop/overview.md
-[availability-zones]: ../availability-zones/az-overview.md
+[availability-zones]: ../reliability/availability-zones-overview.md
[howto-migrate]: migrate-from-classic-vnet.md [attributes]: synchronization.md#attribute-synchronization-and-mapping-to-azure-ad-ds [managed-disks]: ../virtual-machines/managed-disks-overview.md
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md
To get started, [create a managed domain using the Azure portal][tutorial-create
[tutorial-create]: tutorial-create-instance.md [azure-ad-connect]: ../active-directory/hybrid/whatis-azure-ad-connect.md [password-hash-sync]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md
-[availability-zones]: ../availability-zones/az-overview.md
+[availability-zones]: ../reliability/availability-zones-overview.md
[forest-trusts]: concepts-resource-forest.md [administration-concepts]: administration-concepts.md [synchronization]: synchronization.md
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
To see the managed domain in action, you can [domain-join a Windows VM][windows-
[New-AzVirtualNetwork]: /powershell/module/Az.Network/New-AzVirtualNetwork [Get-AzSubscription]: /powershell/module/Az.Accounts/Get-AzSubscription [cloud-shell]: ../cloud-shell/cloud-shell-windows-users.md
-[availability-zones]: ../availability-zones/az-overview.md
+[availability-zones]: ../reliability/availability-zones-overview.md
[New-AzNetworkSecurityRuleConfig]: /powershell/module/az.network/new-aznetworksecurityruleconfig [New-AzNetworkSecurityGroup]: /powershell/module/az.network/new-aznetworksecuritygroup [Set-AzVirtualNetworkSubnetConfig]: /powershell/module/az.network/set-azvirtualnetworksubnetconfig
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
To see the managed domain in action, you can [domain-join a Windows VM][windows-
[windows-join]: join-windows-vm.md [tutorial-ldaps]: tutorial-configure-ldaps.md [tutorial-phs]: tutorial-configure-password-hash-sync.md
-[availability-zones]: ../availability-zones/az-overview.md
+[availability-zones]: ../reliability/availability-zones-overview.md
[portal-deploy]: ../azure-resource-manager/templates/deploy-portal.md [powershell-deploy]: ../azure-resource-manager/templates/deploy-powershell.md [scoped-sync]: scoped-synchronization.md
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
To see this managed domain in action, create and join a virtual machine to the d
[configure-sspr]: ../active-directory/authentication/tutorial-enable-sspr.md [password-hash-sync-process]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services [resource-forests]: concepts-resource-forest.md
-[availability-zones]: ../availability-zones/az-overview.md
+[availability-zones]: ../reliability/availability-zones-overview.md
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS -->
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Before you domain-join VMs and deploy applications that use the managed domain,
[tutorial-create-instance-advanced]: tutorial-create-instance-advanced.md [skus]: overview.md [resource-forests]: concepts-resource-forest.md
-[availability-zones]: ../availability-zones/az-overview.md
+[availability-zones]: ../reliability/availability-zones-overview.md
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS -->
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
The following applications and directories aren't yet supported.
- When a user is managed by Azure AD Connect, the source of authority is on-premises Active Directory. So, user attributes can't be changed in Azure AD. This preview doesn't change the source of authority for users managed by Azure AD Connect. - Attempting to use Azure AD Connect and the on-premises provisioning to provision groups or users into Active Directory Domain Services can lead to the creation of a loop, where Azure AD Connect can overwrite a change that was made by the provisioning service in the cloud. Microsoft is working on a dedicated capability for group or user writeback. Upvote the UserVoice feedback on [this website](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789/) to track the status of the preview. Alternatively, you can use [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) for user or group writeback from Azure AD to Active Directory.
-#### Connectors other than SQL
+#### Connectors other than SQL and LDAP
- The Azure AD ECMA Connector Host is officially supported for the generic SQL connector. While it's possible to use other connectors such as the web services connector or custom ECMA connectors, it's *not yet supported*.
+ The Azure AD ECMA Connector Host is officially supported for the generic SQL and LDAP connectors. While it's possible to use other connectors such as the web services connector or custom ECMA connectors, it's *not yet supported*.
#### Azure AD
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
In this example, you create a policy that emits a custom claim "JoinedData" to J
1. To create the policy, run the following command: ```powershell
- New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema":[{"Source":"user","ID":"extensionattribute1"},{"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],"ClaimsTransformations":[{"ID":"JoinTheData","TransformationMethod":"Join","InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}], "InputParameters": [{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],"OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}') -DisplayName "TransformClaimsExample" -Type "ClaimsMappingPolicy"
+ New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema":[{"Source":"user","ID":"extensionattribute1"},{"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],"ClaimsTransformation":[{"ID":"JoinTheData","TransformationMethod":"Join","InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}], "InputParameters": [{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],"OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}') -DisplayName "TransformClaimsExample" -Type "ClaimsMappingPolicy"
``` 2. To see your new policy, and to get the policy ObjectId, run the following command:
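A minimal sketch of that command, assuming the same AzureAD PowerShell module used for `New-AzureADPolicy` above:
```powershell
# List the tenant's policies; note the ObjectId of "TransformClaimsExample"
Get-AzureADPolicy
```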
active-directory Scenario Web App Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-overview.md
Previously updated : 07/14/2020 Last updated : 11/4/2022 #Customer intent: As an application developer, I want to know how to write a web app that authenticates users and calls web APIs by using the Microsoft identity platform.
You add authentication to your web app so that it can sign users in and call a w
![Web app that calls web APIs](./media/scenario-webapp/web-app.svg)
-Web apps that call web APIs are confidential client applications.
-That's why they register a secret (an application password or certificate) with Azure Active Directory (Azure AD). This secret is passed in during the call to Azure AD to get a token.
+Web apps that call web APIs are confidential client applications. That's why they register a secret (an application password or certificate) with Azure Active Directory (Azure AD). This secret is passed in during the call to Azure AD to get a token.
## Specifics
-> [!NOTE]
-> Adding sign-in to a web app is about protecting the web app itself. That protection is achieved by using *middleware* libraries, not the Microsoft Authentication Library (MSAL). The preceding scenario, [Web app that signs in users](scenario-web-app-sign-user-overview.md), covered that subject.
->
-> This scenario covers how to call web APIs from a web app. You must get access tokens for those web APIs. You use MSAL libraries to acquire these tokens.
+Adding sign-in to a web app is about protecting the web app itself. That protection is achieved by using *middleware* libraries, not the Microsoft Authentication Library (MSAL). The preceding scenario, [Web app that signs in users](scenario-web-app-sign-user-overview.md), covered that subject.
-Development for this scenario involves these specific tasks:
+This scenario covers how to call web APIs from a web app. You must get access tokens for those web APIs. You use MSAL libraries to acquire these tokens.
-- During [application registration](scenario-web-app-call-api-app-registration.md), you must provide a reply URI, secret, or certificate to be shared with Azure AD. If you deploy your app to several locations, you'll provide a reply URI for each location.-- The [application configuration](scenario-web-app-call-api-app-configuration.md) must provide the client credentials that were shared with Azure AD during application registration.
+Development for this scenario involves:
+
+- Providing a reply URI, secret, or certificate to be shared with Azure AD during [application registration](scenario-web-app-call-api-app-registration.md). If you deploy your app to several locations, you'll provide a reply URI for each location.
+- Providing the client credentials in the [application configuration](scenario-web-app-call-api-app-configuration.md). These credentials were shared with Azure AD during application registration.
## Recommended reading
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
Previously updated : 02/17/2021 Last updated : 11/09/2022 # Tutorial: Sign in users and acquire a token for Microsoft Graph in a Node.js & Express web app
Follow the steps in this tutorial to:
> - Add code for user login > - Test the app
-For additional guidance, refer to the [sample code](https://github.com/Azure-Samples/ms-identity-node) that shows how to use MSAL Node to login, logout and acquire an access token for a protected resource such as Microsoft Graph.
+For more information, see the [sample code](https://github.com/Azure-Samples/ms-identity-node) that shows how to use MSAL Node to sign in, sign out and acquire an access token for a protected resource such as Microsoft Graph.
## Prerequisites
Use the [Express application generator tool](https://expressjs.com/en/starter/ge
npm install ```
-You now have a simple Express web app. The file and folder structure of your project should look similar to the following:
+You now have a simple Express web app. The file and folder structure of your project should look similar to the following folder structure:
``` ExpressWebApp/
The web app sample in this tutorial uses the [express-session](https://www.npmjs
## Add app registration details
-1. Create a *.env* file in the root of your project folder. Then add the following code:
+1. Create an *.env* file in the root of your project folder. Then add the following code:
:::code language="text" source="~/ms-identity-node/App/.env":::
Fill in these details with the values you obtain from Azure app registration por
- `Enter_the_Cloud_Instance_Id_Here`: The Azure cloud instance in which your application is registered. - For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com/` (include the trailing forward-slash). - For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).-- `Enter_the_Tenant_Info_here` should be one of the following:
+- `Enter_the_Tenant_Info_here` should be one of the following parameters:
- If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`. - If your application supports *accounts in any organizational directory*, replace this value with `organizations`. - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`.
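For orientation, a filled-in *.env* might look like the following sketch; the variable names and values are illustrative, and the sample's referenced *.env* file defines the actual ones:
```text
# Illustrative values only; use the variable names from the referenced sample .env
CLOUD_INSTANCE=https://login.microsoftonline.com/
TENANT_ID=common
CLIENT_ID=<your-application-client-id>
CLIENT_SECRET=<your-client-secret>
```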
Fill in these details with the values you obtain from Azure app registration por
:::code language="js" source="~/ms-identity-node/App/authConfig.js":::
-## Add code for user login and token acquisition
+## Add code for user sign-in and token acquisition
1. Create a new file named *auth.js* under the *router* folder and add the following code there: :::code language="js" source="~/ms-identity-node/App/routes/auth.js":::
-2. Next, update the *index.js* route by replacing the existing code with the following:
+2. Next, update the *index.js* route by replacing the existing code with the following code snippet:
:::code language="js" source="~/ms-identity-node/App/routes/index.js":::
-3. Finally, update the *users.js* route by replacing the existing code with the following:
+3. Finally, update the *users.js* route by replacing the existing code with the following code snippet:
:::code language="js" source="~/ms-identity-node/App/routes/users.js":::
Create a file named *fetch.js* in the root of your project and add the following
## Register routers and add state management
-In the *app.js* file in the root of the project folder, register the routes you have created earlier and add session support for tracking authentication state using the **express-session** package. Replace the existing code there with the following:
+In the *app.js* file in the root of the project folder, register the routes you've created earlier and add session support for tracking authentication state using the **express-session** package. Replace the existing code there with the following code snippet:
:::code language="js" source="~/ms-identity-node/App/app.js":::
You've completed creation of the application and are now ready to test the app's
## How the application works
-In this tutorial, you instantiated an MSAL Node [ConfidentialClientApplication](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md) object by passing it a configuration object (*msalConfig*) that contains parameters obtained from your Azure AD app registration on Azure portal. The web app you created uses the [OpenID Connect protocol](./v2-protocols-oidc.md) to sign-in users and the [OAuth 2.0 Authorization code grant flow](./v2-oauth2-auth-code-flow.md) obtain access tokens.
+In this tutorial, you instantiated an MSAL Node [ConfidentialClientApplication](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md) object by passing it a configuration object (*msalConfig*) that contains parameters obtained from your Azure AD app registration on Azure portal. The web app you created uses the [OpenID Connect protocol](./v2-protocols-oidc.md) to sign-in users and the [OAuth 2.0 authorization code flow](./v2-oauth2-auth-code-flow.md) to obtain access tokens.
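For orientation, a minimal sketch of that instantiation, assuming the `@azure/msal-node` package (the variable names are illustrative; the sample's *authConfig.js* defines the real ones):
```javascript
// Build a ConfidentialClientApplication from app registration values.
const msal = require('@azure/msal-node');

const msalConfig = {
    auth: {
        clientId: process.env.CLIENT_ID,                               // Application (client) ID
        authority: process.env.CLOUD_INSTANCE + process.env.TENANT_ID, // e.g. https://login.microsoftonline.com/<tenant>
        clientSecret: process.env.CLIENT_SECRET,                       // client credential registered in Azure AD
    },
};

const msalInstance = new msal.ConfidentialClientApplication(msalConfig);
```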
## Next steps
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
Some products that include SharePoint and OneDrive, such as Microsoft 365, do no
1. Create a user context in the unmanaged organization through signing up for Power BI. For convenience, these steps assume that path.
-2. Open the [Power BI site](https://powerbi.com) and select **Start Free**. Enter a user account that uses the domain name for the organization; for example, `admin@fourthcoffee.xyz`. After you enter in the verification code, check your email for the confirmation code.
+2. Open the [Power BI site](https://powerbi.microsoft.com) and select **Start Free**. Enter a user account that uses the domain name for the organization; for example, `admin@fourthcoffee.xyz`. After you enter the verification code, check your email for the confirmation code.
3. In the confirmation email from Power BI, select **Yes, that's me**.
active-directory Groups Write Back Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-write-back-portal.md
You can also configure writeback settings for a group on the property page for t
- Targeted the writeback type as a security group :::image type="content" source="./media/groups-write-back-portal/groups-properties-view.png" alt-text="Screenshot of changing writeback settings in the group properties." lightbox="media/groups-write-back-portal/groups-properties-view.png":::
+
+## Read the Writeback configuration using PowerShell
+
+You can use the following PowerShell `Get-MgGroup` cmdlet to get a list of writeback-enabled groups.
+
+```powershell-console
+Connect-MgGraph -Scopes @('Group.Read.all')
+Select-MgProfile -Name beta
+Get-MgGroup -All | Where-Object {$_.AdditionalProperties.writebackConfiguration.isEnabled -Like $true} | Select-Object DisplayName,@{N="WriteBackEnabled";E={$_.AdditionalProperties.writebackConfiguration.isEnabled}}
+
+DisplayName WriteBackEnabled
+----------- ----------------
+CloudGroup1 True
+CloudGroup2 True
+```
+
+## Read the Writeback configuration using Graph Explorer
+
+Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) and use the following endpoint: `https://graph.microsoft.com/beta/groups/{Group_ID}`.
+
+Replace {Group_ID} with the object ID of a cloud group, and then select **Run query**.
+In the **Response Preview**, scroll to the end to see this part of the JSON response:
+
+```JSON
+"writebackConfiguration": {
+ "isEnabled": true,
+```
## Next steps - Check out the groups REST API documentation for the [preview writeback property on the settings template](/graph/api/resources/group?view=graph-rest-beta&preserve-view=true).-- For more about group writeback operations, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback.md)
+- For more about group writeback operations, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback.md).
+- For more information about the writebackConfiguration resource, read [writebackConfiguration resource type](/graph/api/resources/writebackconfiguration?view=graph-rest-beta).
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
When a requirement exists to deploy IaaS workloads to Azure that require identit
* Consider a location that is geographically close to the servers and applications that require Azure AD DS services.
-* Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md).
+* Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](../../reliability/availability-zones-service-support.md).
**Object provisioning** - Azure AD DS synchronizes identities from the Azure AD that is associated with the subscription that Azure AD DS is deployed into. It's also worth noting that if the associated Azure AD has synchronization set up with Azure AD Connect (user forest scenario) then the life cycle of these identities can also be reflected in Azure AD DS. This service has two modes that can be used for provisioning user and group objects from Azure AD.
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip-workbook.md
Each item in the Risky IP report table shows aggregated information about failed
| Detection Window Length | Shows the type of detection time window. The aggregation trigger types are per hour or per day. This is helpful for differentiating a high-frequency brute force attack from a slow attack where the number of attempts is distributed throughout the day. | | IP Address | The single risky IP address that had either bad password or extranet lockout sign-in activities. This could be an IPv4 or an IPv6 address. | | Bad Password Error Count (50126) | The count of Bad Password errors that occurred from the IP address during the detection time window. The Bad Password errors can happen multiple times to certain users. Notice this does not include failed attempts due to expired passwords. |
-| Extranet Lock Out Error Count (30030) | The count of Extranet Lockout errors that occurred from the IP address during the detection time window. The Extranet Lockout errors can happen multiple times to certain users. This will only be seen if Extranet Lockout is configured in AD FS (versions 2012R2 or higher). <b>Note</b> We strongly recommend turning this feature on if you allow extranet logins using passwords. |
+| Extranet Lock Out Error Count (300030) | The count of Extranet Lockout errors that occurred from the IP address during the detection time window. The Extranet Lockout errors can happen multiple times to certain users. This will only be seen if Extranet Lockout is configured in AD FS (versions 2012R2 or higher). <b>Note</b> We strongly recommend turning this feature on if you allow extranet logins using passwords. |
| Unique Users Attempted | The count of unique user accounts attempted from the IP address during the detection time window. This provides a mechanism to differentiate a single user attack pattern versus multi-user attack pattern. | Filter the report by IP address or user name to see an expanded view of sign-ins details for each risky IP event.
active-directory How To Connect Install Existing Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-existing-tenant.md
You can manage some users on-premises and other in the cloud. A common scenario
If you started to manage users in Azure AD that are also in on-premises AD and later want to use Connect, then there are some additional concerns you need to consider. ## Sync with existing users in Azure AD
-When you install Azure AD Connect and you start synchronizing, the Azure AD sync service (in Azure AD) does a check on every new object and tries to find an existing object to match. There are three attributes used for this process: **userPrincipalName**, **proxyAddresses**, and **sourceAnchor**/**immutableID**. A match on **userPrincipalName** and **proxyAddresses** is known as a **soft match**. A match on **sourceAnchor** is known as **hard match**. For the **proxyAddresses** attribute only the value with **SMTP:**, that is the primary email address, is used for the evaluation.
+When you install Azure AD Connect and you start synchronizing, the Azure AD sync service (in Azure AD) does a check on every new object and tries to find an existing object to match. There are three attributes used for this process: **userPrincipalName**, **proxyAddresses**, and **sourceAnchor**/**immutableID**. A match on **userPrincipalName** or **proxyAddresses** is known as a **soft match**. A match on **sourceAnchor** is known as **hard match**. For the **proxyAddresses** attribute only the value with **SMTP:**, that is the primary email address, is used for the evaluation.
The match is only evaluated for new objects coming from Connect. If you change an existing object so that it matches any of these attributes, then you see an error instead.
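As an illustrative sketch (not from the article itself): one common way to force a hard match for a single user is to compute the base64-encoded **objectGUID** from on-premises AD and stamp it on the cloud user as **immutableID**. The identities and module choice here are assumptions:
```powershell
# Compute the base64 sourceAnchor from the on-premises objectGUID (ActiveDirectory module)
$guid = (Get-ADUser -Identity 'jdoe').ObjectGUID
$immutableId = [System.Convert]::ToBase64String($guid.ToByteArray())

# Stamp it on the matching cloud user (AzureAD module)
Set-AzureADUser -ObjectId 'jdoe@contoso.com' -ImmutableId $immutableId
```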
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2019 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016 but since WS2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration.
+- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016 but since WS2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration.
- The minimum .Net Framework version required is 4.6.2, and newer versions of .Net are also supported. - Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported.
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
To reduce the configuration administrative effort, you should first consider the
> [!IMPORTANT] > Configuring selective password hash synchronization directly influences password writeback. Password changes or password resets that are initiated in Azure Active Directory write back to on-premises Active Directory only if the user is in scope for password hash synchronization.
+> [!IMPORTANT]
+> Selective password hash synchronization is supported in Azure AD Connect version 1.6.2.4 or later. If you are using an earlier version, upgrade to the latest version.
+ ### The adminDescription attribute Both scenarios rely on setting the adminDescription attribute of users to a specific value. This allows the rules to be applied and is what makes selective PHS work.
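As a hedged illustration, the attribute can be stamped with the ActiveDirectory module; the value `User_NoPHS` is an assumed example, since the exact string depends on the rule configuration you choose:
```powershell
# Tag a user for the selective PHS rules by setting adminDescription
# "User_NoPHS" is illustrative; use the value your rule configuration expects.
Set-ADUser -Identity 'jdoe' -Replace @{adminDescription = 'User_NoPHS'}
```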
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-staging-server.md
A common and supported method is to run the sync engine in a virtual machine. In
If you are not using the SQL Server Express that comes with Azure AD Connect, then high availability for SQL Server should also be considered. The high availability solutions supported include SQL clustering and AOA (Always On Availability Groups). Unsupported solutions include mirroring.
-Support for SQL AOA was added to Azure AD Connect in version 1.1.524.0. You must enable SQL AOA before installing Azure AD Connect. During installation, Azure AD Connect detects whether the SQL instance provided is enabled for SQL AOA or not. If SQL AOA is enabled, Azure AD Connect further figures out if SQL AOA is configured to use synchronous replication or asynchronous replication. When setting up the Availability Group Listener, it is recommended that you set the RegisterAllProvidersIP property to 0. This is because Azure AD Connect currently uses SQL Native Client to connect to SQL and SQL Native Client does not support the use of MultiSubNetFailover property.
+Support for SQL AOA was added to Azure AD Connect in version 1.1.524.0. You must enable SQL AOA before installing Azure AD Connect. During installation, Azure AD Connect detects whether the SQL instance provided is enabled for SQL AOA or not. If SQL AOA is enabled, Azure AD Connect further figures out if SQL AOA is configured to use synchronous replication or asynchronous replication. When setting up the Availability Group Listener, the RegisterAllProvidersIP property must be set to 0. This is because Azure AD Connect currently uses SQL Native Client to connect to SQL and SQL Native Client does not support the use of MultiSubNetFailover property.
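A short sketch of how that property is typically set with the FailoverClusters module; the listener resource name is an assumption:
```powershell
# Set RegisterAllProvidersIP to 0 on the Availability Group listener's cluster resource
# "AGListener" is an assumed name; check Get-ClusterResource for yours.
Import-Module FailoverClusters
Get-ClusterResource -Name 'AGListener' | Set-ClusterParameter -Name RegisterAllProvidersIP -Value 0
```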
## Appendix CSAnalyzer
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsynctools.md
Import-ADSyncToolsSourceAnchor -OutputFile '.\AllSyncUsers.csv'
``` #### EXAMPLE 2 ```
-Another example of how to use this cmdlet
+Import-ADSyncToolsSourceAnchor -OutputFile '.\AllSyncUsers.csv' -IncludeSyncUsersFromRecycleBin
``` ### PARAMETERS #### -Output
New-ADSyncToolsSqlConnection -Server SQLserver01.Contoso.com -Port 49823 | Invok
#### EXAMPLE 2 ``` $sqlConn = New-ADSyncToolsSqlConnection -Server SQLserver01.Contoso.com -Port 49823
-```
Invoke-ADSyncToolsSqlQuery -SqlConnection $sqlConn -Query 'SELECT *, database_id FROM sys.databases'
+```
### PARAMETERS #### -SqlConnection SQL Connection
Each certificate will be backed up to a separated filename: ObjectClass_ObjectGU
The script will also create a log file in CSV format showing all the users with certificates that either are valid or expired including the actual action taken (Skipped/Exported/Deleted). ### EXAMPLES #### EXAMPLE 1
-```
Check all users in target OU - Expired Certificates will be copied to separated files and no certificates will be removed ``` Remove-ADSyncToolsExpiredCertificates -TargetOU "OU=Users,OU=Corp,DC=Contoso,DC=com" -ObjectClass user
-#### EXAMPLE 2
```
+#### EXAMPLE 2
Delete Expired Certs from all Computer objects in target OU - Expired Certificates will be copied to files and removed from AD ``` Remove-ADSyncToolsExpiredCertificates -TargetOU "OU=Computers,OU=Corp,DC=Contoso,DC=com" -ObjectClass computer -BackupOnly $false
+```
### PARAMETERS #### -TargetOU Target OU to lookup for AD objects
Creates a trace file '.\ADimportTrace_yyyyMMddHHmmss.log' on the current folder.
To use -ADConnectorXML, go to the Synchronization Service Manager, right-click your AD Connector and select "Export Connector..." ### EXAMPLES #### EXAMPLE 1
-```
Trace Active Directory Import for user objects by providing an AD Connector XML file ``` Trace-ADSyncToolsADImport -DC 'DC1.contoso.com' -RootDN 'DC=Contoso,DC=com' -Filter '(&(objectClass=user))' -ADConnectorXML .\ADConnector.xml
-#### EXAMPLE 2
```
+#### EXAMPLE 2
Trace Active Directory Import for all objects by providing the Active Directory watermark (cookie) and AD Connector credential ``` $creds = Get-Credential Trace-ADSyncToolsADImport -DC 'DC1.contoso.com' -RootDN 'DC=Contoso,DC=com' -Credential $creds -ADwatermark "TVNEUwMAAAAXyK9ir1zSAQAAAAAAAAAA(...)"
+```
### PARAMETERS #### -DC Target Domain Controller
Note: ConsistencyGuid Report must be imported with Tab delimiter
### EXAMPLES #### EXAMPLE 1 ```
-Import-Csv .\AllSyncUsersTEST-Report.csv -Delimiter "`t"| Update-ADSyncToolsSourceAnchor -Output .\AllSyncUsersTEST-Result2 -WhatIf
+Import-Csv .\AllSyncUsers-Report.csv -Delimiter "`t"| Update-ADSyncToolsSourceAnchor -Output .\AllSyncUsersTEST-Result2 -WhatIf
``` #### EXAMPLE 2 ```
-Import-Csv .\AllSyncUsersTEST-Report.csv -Delimiter "`t"| Update-ADSyncToolsSourceAnchor -Output .\AllSyncUsersTEST-Result2
+Import-Csv .\AllSyncUsers-Report.csv -Delimiter "`t"| Update-ADSyncToolsSourceAnchor -Output .\AllSyncUsersTEST-Result2
``` ### PARAMETERS #### -DistinguishedName
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history-archive.md
Released: March 2017
Azure AD Connect sync * Fixed an issue which causes Azure AD Connect wizard to fail if the display name of the Azure AD Connector does not contain the initial onmicrosoft.com domain assigned to the Azure AD tenant. * Fixed an issue which causes Azure AD Connect wizard to fail while making connection to SQL database when the password of the Sync Service Account contains special characters such as apostrophe, colon and space.
-* Fixed an issue which causes the error "The image has an anchor that is different than the image" to occur on an Azure AD Connect server in staging mode, after you have temporarily excluded an on-premises AD object from syncing and then included it again for syncing.
+* Fixed an issue which causes the error "The dimage has an anchor that is different than the image" to occur on an Azure AD Connect server in staging mode, after you have temporarily excluded an on-premises AD object from syncing and then included it again for syncing.
* Fixed an issue which causes the error "The object located by DN is a phantom" to occur on an Azure AD Connect server in staging mode, after you have temporarily excluded an on-premises AD object from syncing and then included it again for syncing. AD FS management
active-directory Tshoot Connect Largeobjecterror Usercertificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-largeobjecterror-usercertificate.md
To obtain the list of objects in your tenant with LargeObject errors, use one of
* If your tenant is enabled for Azure AD Connect Health for sync, you can refer to the [Synchronization Error Report](./how-to-connect-health-sync.md) provided.
- * The notification email for directory synchronization errors that is sent at the end of each sync cycle has the list of objects with LargeObject errors.
* The [Synchronization Service Manager Operations tab](./how-to-connect-sync-service-manager-ui-operations.md) displays the list of objects with LargeObject errors if you click the latest Export to Azure AD operation. ## Mitigation options
Until the LargeObject error is resolved, other attribute changes to the same obj
* Implement an **outbound sync rule** in Azure AD Connect that exports a **null value instead of the actual values for objects with more than 15 certificate values**. This option is suitable if you do not require any of the certificate values to be exported to Azure AD for objects with more than 15 values. For details on how to implement this sync rule, refer to next section [Implementing sync rule to limit export of userCertificate attribute](#implementing-sync-rule-to-limit-export-of-usercertificate-attribute).
- * Reduce the number of certificate values on the on-premises AD object (15 or less) by removing values that are no longer in use by your organization. This is suitable if the attribute bloat is caused by expired or unused certificates. You can use the [PowerShell script available here](https://gallery.technet.microsoft.com/Remove-Expired-Certificates-0517e34f) to help find, backup, and delete expired certificates in your on-premises AD. Before deleting the certificates, it is recommended that you verify with the Public-Key-Infrastructure administrators in your organization.
+ * Reduce the number of certificate values on the on-premises AD object (15 or fewer) by removing values that are no longer in use by your organization. This is suitable if the attribute bloat is caused by expired or unused certificates. You can use the cmdlet [Remove-ADSyncToolsExpiredCertificates](reference-connect-adsynctools.md#remove-adsynctoolsexpiredcertificates) to help find, back up, and delete expired certificates in your on-premises AD (see the sketch after this list). Before deleting the certificates, it is recommended that you verify with the Public-Key-Infrastructure administrators in your organization.
* Configure Azure AD Connect to exclude the userCertificate attribute from being exported to Azure AD. In general, we do not recommend this option since the attribute may be used by Microsoft Online Services to enable specific scenarios. In particular: * The userCertificate attribute on the User object is used by Exchange Online and Outlook clients for message signing and encryption. To learn more about this feature, refer to article [S/MIME for message signing and encryption](/microsoft-365/security/office-365-security/s-mime-for-message-signing-and-encryption).
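The usage sketch referenced in the list above, mirroring the cmdlet's own reference examples (the OU path is a placeholder):
```powershell
# Report and back up expired user certificates without deleting them
Remove-ADSyncToolsExpiredCertificates -TargetOU 'OU=Users,OU=Corp,DC=Contoso,DC=com' -ObjectClass user -BackupOnly $true
```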
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
AKS is a managed service offering unique capabilities with lower management over
We recommend using AKS clusters backed by [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) and the [Azure Standard Load Balancer](./load-balancer-standard.md) to ensure you get features such as: * [Multiple node pools](./use-multiple-node-pools.md),
-* [Availability Zones](../availability-zones/az-overview.md),
+* [Availability Zones](../reliability/availability-zones-overview.md),
* [Authorized IP ranges](./api-server-authorized-ip-ranges.md), * [Cluster Autoscaler](./cluster-autoscaler.md), * [Azure Policy for AKS](../governance/policy/concepts/policy-for-kubernetes.md), and
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
This article detailed how to create an AKS cluster that uses availability zones.
[az-feature-list]: /cli/azure/feature#az-feature-list [az-provider-register]: /cli/azure/provider#az-provider-register [az-aks-create]: /cli/azure/aks#az-aks-create
-[az-overview]: ../availability-zones/az-overview.md
+[az-overview]: ../reliability/availability-zones-overview.md
[best-practices-bc-dr]: operator-best-practices-multi-region.md [aks-support-policies]: support-policies.md [aks-faq]: faq.md
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
description: Learn how to configure Azure CNI Overlay networking in Azure Kubern
Previously updated : 08/29/2022 Last updated : 11/08/2022 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
-The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow.
+The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod, either from a pre-reserved set of IPs on every node, or from a separate subnet reserved for pods. This approach requires planning IP addresses and could lead to address exhaustion, which introduces difficulties scaling your clusters as your application demands grow.
-With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
+With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
> [!NOTE]
-> Azure CNI Overlay is currently available in the following regions:
+> Azure CNI Overlay is currently available only in the following regions:
> - North Central US > - West Central US ## Overview of overlay networking In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
-A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
+A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
:::image type="content" source="media/azure-cni-overlay/azure-cni-overlay.png" alt-text="A diagram showing two nodes with three pods each running in an overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
## IP address planning
-* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so ensure that you have a subnet big enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
+* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify you have a subnet large enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
+
+* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
+
+The following are additional factors to consider when planning the pod IP address space:
-* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
-The following are additional factors to consider when planning pod address space:
* Pod CIDR space must not overlap with the cluster subnet range. * Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks. * The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet. * **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range should also not overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
-* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the kubernetes.default.svc.cluster.local address.
+* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address.
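As a hedged sketch, both ranges are typically supplied at cluster creation through the `--service-cidr` and `--dns-service-ip` arguments (the values below are illustrative):
```azurecli
# Illustrative ranges: a /16 service CIDR with the DNS service on .10 (not the first address)
az aks create -n myOverlayCluster -g myResourceGroup --network-plugin azure --network-plugin-mode overlay --service-cidr 10.0.0.0/16 --dns-service-ip 10.0.0.10
```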
## Maximum pods per node
You can configure the maximum number of pods per node at the time of cluster cre
## Choosing a network model to use
-Azure CNI offers two IP addressing options for pods- the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
+Azure CNI offers two IP addressing options for pods - the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
Use overlay networking when:
-* You would like to scale to a large number of Pods but have limited IP address space in your VNet.
+* You would like to scale to a large number of pods, but have limited IP address space in your VNet.
* Most of the pod communication is within the cluster. * You don't need advanced AKS features, such as virtual nodes.
Use the traditional VNet option when:
The overlay solution has the following limitations today * Only available for Linux and not for Windows.
-* You can't deploy multiple overlay clusters in the same subnet.
+* You can't deploy multiple overlay clusters on the same subnet.
* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay. * You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
-* v5 VM SKUs are not currently supported.
-
-## Steps to set up overlay clusters
+* v5 VM SKUs are currently not supported.
+## Install the aks-preview Azure CLI extension
-The following example walks through the steps to create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay. Be sure to replace the variables with your own values.
-First, opt into the feature by running the following command:
+To install the aks-preview extension, run the following command:
-```azurecli-interactive
-az feature register --namespace Microsoft.ContainerService --name AzureOverlayPreview
+```azurecli
+az extension add --name aks-preview
```
-Create a virtual network with a subnet for the cluster nodes.
-
-```azurecli-interactive
-resourceGroup="myResourceGroup"
-vnet="myVirtualNetwork"
-location="westcentralus"
-
-# Create the resource group
-az group create --name $resourceGroup --location $location
+Run the following command to update to the latest version of the extension released:
-# Create a VNet and a subnet for the cluster nodes
-az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none
+```azurecli
+az extension update --name aks-preview
```
-Create a cluster with Azure CNI Overlay. Use `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR is not specified then AKS assigns a default space, viz. 10.244.0.0/16.
+## Register the 'AzureOverlayPreview' feature flag
-```azurecli-interactive
-clusterName="myOverlayCluster"
-subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
+Register the `AzureOverlayPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayPreview"
```
-## Frequently asked questions
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-* *How do pods and cluster nodes communicate with each other?*
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AzureOverlayPreview')].{Name:name,State:properties.state}"
+```
- Pods and nodes talk to each other directly without any SNAT requirements.
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
-* *Can I configure the size of the address space assigned to each space?*
+## Set up overlay clusters
- No, this is fixed at `/24` today and can't be changed.
+The following steps create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay.
+1. Create a virtual network with a subnet for the cluster nodes. Replace the values for the variables `resourceGroup`, `vnet` and `location`.
-* *Can I add more private pod CIDRs to a cluster after the cluster has been created?*
+ ```azurecli-interactive
+ resourceGroup="myResourceGroup"
+ vnet="myVirtualNetwork"
+ location="westcentralus"
+
+ # Create the resource group
+ az group create --name $resourceGroup --location $location
+
+ # Create a VNet and a subnet for the cluster nodes
+ az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
+ az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none
+ ```
- No, a private pod CIDR can only be specified at the time of cluster creation.
+2. Create a cluster with Azure CNI Overlay. Use the argument `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR is not specified, AKS assigns a default space of 10.244.0.0/16. Replace the values for the variables `clusterName` and `subscription`.
+ ```azurecli-interactive
+ clusterName="myOverlayCluster"
+ subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
+
+ az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet
+ ```
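Optionally, as a quick sanity check (a sketch; the exact property names in the output may vary by API version), confirm the plugin mode and pod CIDR on the new cluster:
```azurecli-interactive
az aks show -n $clusterName -g $resourceGroup --query "networkProfile.{plugin:networkPlugin, mode:networkPluginMode, podCidr:podCidr}"
```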
-* *What are the max nodes and pods per cluster supported by Azure CNI Overlay?*
+## Next steps
- The max scale in terms of nodes and pods per cluster is the same as the max limits supported by AKS today.
+To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
Title: Deploy Azure API Management instance to multiple Azure regions
description: Learn how to deploy a Premium tier Azure API Management instance to multiple Azure regions to improve API gateway availability. Last updated 09/27/2022
When adding a region, you configure:
* The number of scale [units](upgrade-and-scale.md) that region will host.
-* Optional [zone redundancy](../availability-zones/migrate-api-mgt.md), if that region supports it.
+* Optional [zone redundancy](../reliability/migrate-api-mgt.md), if that region supports it.
* [Virtual network](virtual-network-concepts.md) settings in the added region, if networking is configured in the existing region or regions.
When adding a region, you configure:
1. Select **+ Add** in the top bar. 1. Select the added location from the dropdown list. 1. Select the number of scale **[Units](upgrade-and-scale.md)** in the location.
-1. Optionally select one or more [**Availability zones**](../availability-zones/migrate-api-mgt.md).
+1. Optionally select one or more [**Availability zones**](../reliability/migrate-api-mgt.md).
1. If the API Management instance is deployed in a [virtual network](api-management-using-with-vnet.md), configure virtual network settings in the location. Select an existing virtual network, subnet, and public IP address that are available in the location. 1. Select **Add** to confirm. 1. Repeat this process until you configure all locations.
This section provides considerations for multi-region deployments when the API M
## Next steps
-* Learn more about [zone redundancy](../availability-zones/migrate-api-mgt.md) to improve the availability of an API Management instance in a region.
+* Learn more about [zone redundancy](../reliability/migrate-api-mgt.md) to improve the availability of an API Management instance in a region.
* For more information about virtual networks and API Management, see:
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Check out the following related resources for the backup/restore process:
- [Automating API Management Backup and Restore with Logic Apps](https://github.com/Azure/api-management-samples/tree/master/tutorials/automating-apim-backup-restore-with-logic-apps) - [How to move Azure API Management across regions](api-management-howto-migrate.md)-- API Management **Premium** tier also supports [zone redundancy](../availability-zones/migrate-api-mgt.md), which provides resiliency and high availability to a service instance in a specific Azure region (location).
+- API Management **Premium** tier also supports [zone redundancy](../reliability/migrate-api-mgt.md), which provides resiliency and high availability to a service instance in a specific Azure region (location).
[backup an api management service]: #step1 [restore an api management service]: #step2
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
In the Developer, Basic, Standard, and Premium tiers of API Management, the publ
* The service subscription is [suspended](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) or [warned](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) (for example, for nonpayment) and then reinstated. * (Developer and Premium tiers) Azure Virtual Network is added to or removed from the service. * (Developer and Premium tiers) API Management service is switched between external and internal VNet deployment mode.
-* (Premium tier) [Availability zones](../availability-zones/migrate-api-mgt.md) are enabled, added, or removed.
+* (Premium tier) [Availability zones](../reliability/migrate-api-mgt.md) are enabled, added, or removed.
* (Premium tier) In [multi-regional deployments](api-management-howto-deploy-multi-region.md), the regional IP address changes if a region is vacated and then reinstated. > [!IMPORTANT]
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
The following table summarizes migration options for instances in the different
|Tier |Migration options |
|---|---|
-|Premium | 1. Enable [zone redundancy](../availability-zones/migrate-api-mgt.md)<br/> -or-<br/> 2. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/> -or-<br/> 3. Update existing [VNet configuration](#update-vnet-configuration) |
+|Premium | 1. Enable [zone redundancy](../reliability/migrate-api-mgt.md)<br/> -or-<br/> 2. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/> -or-<br/> 3. Update existing [VNet configuration](#update-vnet-configuration) |
|Developer | 1. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/>-or-<br/> 2. Update existing [VNet configuration](#update-vnet-configuration) |
| Standard | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier.<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> |
| Basic | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier.<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> |
The virtual network configuration is updated, and the instance is migrated to th
## Next steps

* Learn more about using a [virtual network](virtual-network-concepts.md) with API Management.
-* Learn more about enabling [availability zones](../availability-zones/migrate-api-mgt.md).
+* Learn more about enabling [availability zones](../reliability/migrate-api-mgt.md).
api-management High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/high-availability.md
This article introduces service capabilities and considerations to ensure that your API Management instance continues to serve API requests if Azure outages occur.
-API Management supports the following key service capabilities that are recommended for [reliable and resilient](../availability-zones/overview.md) Azure solutions. Use them individually, or together, to improve the availability of your API Management solution:
+API Management supports the following key service capabilities that are recommended for [reliable and resilient](../reliability/overview.md) Azure solutions. Use them individually, or together, to improve the availability of your API Management solution:
* **Availability zones**, to provide resilience to datacenter-level outages
API Management supports the following key service capabilities that are recommen
## Availability zones
-Azure [availability zones](../availability-zones/az-overview.md) are physically separate locations within an Azure region that are tolerant to datacenter-level failures. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
+Azure [availability zones](../reliability/availability-zones-overview.md) are physically separate locations within an Azure region that are tolerant to datacenter-level failures. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
-Enabling [zone redundancy](../availability-zones/migrate-api-mgt.md) for an API Management instance in a supported region provides redundancy for all [service components](api-management-key-concepts.md#api-management-components): gateway, management plane, and developer portal. Azure automatically replicates all service components across the zones that you select.
+Enabling [zone redundancy](../reliability/migrate-api-mgt.md) for an API Management instance in a supported region provides redundancy for all [service components](api-management-key-concepts.md#api-management-components): gateway, management plane, and developer portal. Azure automatically replicates all service components across the zones that you select.
When you enable zone redundancy in a region, consider the number of API Management scale [units](upgrade-and-scale.md) that need to be distributed. Minimally, configure the same number of units as the number of availability zones, or a multiple so that the units are distributed evenly across the zones. For example, if you select 3 availability zones in a region, you could have 3 units so that each zone hosts one unit.
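To make the distribution rule concrete, here is a tiny JavaScript check (illustrative only, not part of any SDK):

```js
// Units spread evenly only when the count is a positive multiple of the zone count.
function isEvenlyDistributed(units, zones) {
  return zones > 0 && units >= zones && units % zones === 0;
}

console.log(isEvenlyDistributed(3, 3)); // true: one unit per zone
console.log(isEvenlyDistributed(4, 3)); // false: one zone would carry an extra unit
```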
For details, see the blog post [Back-end API redundancy with Azure API Manager](
## Next steps
-* Learn more about [resiliency in Azure](../availability-zones/overview.md)
+* Learn more about [reliability in Azure](../reliability/overview.md)
* Learn more about [designing reliable Azure applications](/azure/architecture/framework/resiliency/app-design)
* Read [API Management and reliability](/azure/architecture/framework/services/networking/api-management/reliability) in the Azure Well-Architected Framework
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
With a private endpoint and Private Link, you can:
## Prerequisites

- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
- - The API Management instance must be hosted on the [`stv2` compute platform](compute-infrastructure.md). For example, create a new instance or, if you already have an instance in the Premium service tier, enable [zone redundancy](../availability-zones/migrate-api-mgt.md).
+ - The API Management instance must be hosted on the [`stv2` compute platform](compute-infrastructure.md). For example, create a new instance or, if you already have an instance in the Premium service tier, enable [zone redundancy](../reliability/migrate-api-mgt.md).
  - Do not deploy (inject) the instance into an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) virtual network.
- A virtual network and subnet to host the private endpoint. The subnet may contain other Azure resources.
- (Recommended) A virtual machine in the same or a different subnet in the virtual network, to test the private endpoint.
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The new v2 SKU includes the following enhancements:
- **Autoscaling**: Application Gateway or WAF deployments under the autoscaling SKU can scale out or in based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning. This SKU offers true elasticity. In the Standard_v2 and WAF_v2 SKU, Application Gateway can operate both in fixed capacity (autoscaling disabled) and in autoscaling enabled mode. Fixed capacity mode is useful for scenarios with consistent and predictable workloads. Autoscaling mode is beneficial in applications that see variance in application traffic.
- **Zone redundancy**: An Application Gateway or WAF deployment can span multiple Availability Zones, removing the need to provision separate Application Gateway instances in each zone with a Traffic Manager. You can choose a single zone or multiple zones where Application Gateway instances are deployed, which makes it more resilient to zone failure. The backend pool for applications can be similarly distributed across availability zones.
- Zone redundancy is available only where Azure Zones are available. In other regions, all other features are supported. For more information, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md)
+ Zone redundancy is available only where Azure Zones are available. In other regions, all other features are supported. For more information, see [Regions and Availability Zones in Azure](../reliability/availability-zones-service-support.md)
- **Static VIP**: Application Gateway v2 SKU supports the static VIP type exclusively. This ensures that the VIP associated with the application gateway doesn't change for the lifecycle of the deployment, even after a restart. There isn't a static VIP in v1, so you must use the application gateway URL instead of the IP address for domain name routing to App Services via the application gateway.
- **Header Rewrite**: Application Gateway allows you to add, remove, or update HTTP request and response headers with v2 SKU. For more information, see [Rewrite HTTP headers with Application Gateway](./rewrite-http-headers-url.md)
- **Key Vault Integration**: Application Gateway v2 supports integration with Key Vault for server certificates that are attached to HTTPS enabled listeners. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md).
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
A private endpoint is a network interface that uses a private IP address from th
> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respective frontend IP configuration. Frontend IP configurations without an associated listener won't be shown as a _Target sub-resource_.

> [!Note]
-> If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID, along with sub-resource to your frontend configuration. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the resource ID would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
+> If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID and Frontend Configuration ID as the target sub-resource. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the target sub-resource would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
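Because that ID is easy to mistype, it can help to assemble it from its parts. A small illustrative helper (plain JavaScript; all names are the placeholders from the example above):

```js
// Builds the frontend IP configuration ID used as the private endpoint target sub-resource.
function frontendConfigurationId(subscriptionId, resourceGroup, gatewayName, frontendName) {
  return `/subscriptions/${subscriptionId}/resourceGroups/${resourceGroup}` +
    `/providers/Microsoft.Network/applicationGateways/${gatewayName}` +
    `/frontendIPConfigurations/${frontendName}`;
}

console.log(frontendConfigurationId("xxxx-xxxx-xxxx-xxxx-xxxx", "resourceGroupname", "appgwname", "PrivateFrontendIp"));
```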
# [Azure PowerShell](#tab/powershell)
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
# Availability zones support for Azure Automation
-Azure Automation uses [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region.
+Azure Automation uses [Azure availability zones](../reliability/availability-zones-overview.md) to provide improved resiliency and high availability to a service instance in a specific Azure region.
-[Azure availability zones](../availability-zones/az-overview.md#availability-zones) is a
-high-availability offering that protects your applications and data from data center failures.
-Availability zones are unique physical locations within an Azure region and each region comprises of one or more data center(s) equipped with independent power, cooling, and networking. To ensure resiliency, there needs to be a minimum of three separate zones in all enabled regions.
+Azure availability zones is a high-availability offering that protects your applications and data from data center failures. Availability zones are unique physical locations within an Azure region, and each region comprises one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
A zone redundant Automation account automatically distributes traffic to the Automation account through various management operations and runbook jobs amongst the availability zones in the supported region. The replication is handled at the service level to these physically separate zones, making the service resilient to a zone failure with no impact on the availability of the Automation accounts in the same region.
In the event when a zone is down, there's no action required by you to recover f
## Supported regions with availability zones
-See [Regions and Availability Zones in Azure](../availability-zones/az-overview.md) for the Azure regions that have availability zones.
+See [Regions and Availability Zones in Azure](../reliability/availability-zones-service-support.md) for the Azure regions that have availability zones.
Automation accounts currently support the following regions:

- China North 3
There is no change to the [Service Level Agreement](https://azure.microsoft.com/
## Next steps

-- Learn more about [regions that support availability zones](../availability-zones/az-overview.md).
+- Learn more about [regions that support availability zones](../reliability/availability-zones-service-support.md).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Build your own disaster recovery strategy to handle a region-wide or zone-wide f
### Availability zones support for Azure Automation
-Azure Automation now supports [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region. [Learn more](https://learn.microsoft.com/azure/automation/automation-availability-zones).
+Azure Automation now supports [Azure availability zones](../reliability/availability-zones-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region. [Learn more](https://learn.microsoft.com/azure/automation/automation-availability-zones).
## July 2022
Azure Automation Run As Account will retire on September 30, 2023 and will be re
**Type:** Enhancement to an existing feature
-In addition to the support for Azure VMs and Arc-enabled Servers, Azure Automation Hybrid Worker Extension (preview) now supports Arc-enabled VMware VMs as a target. You can now orchestrate management tasks using PowerShell and Python runbooks on Azure VMs, Arc-enabled Servers, and Arc-enabled VMWare VMs with an identical experience. Read [here](extension-based-hybrid-runbook-worker-install.md) for more information.
+In addition to the support for Azure VMs and Arc-enabled Servers, Azure Automation Hybrid Worker Extension (preview) now supports Arc-enabled VMware VMs as a target. You can now orchestrate management tasks using PowerShell and Python runbooks on Azure VMs, Arc-enabled Servers, and Arc-enabled VMware VMs with an identical experience. Read [here](extension-based-hybrid-runbook-worker-install.md) for more information.
## March 2022
availability-zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/overview.md
- Title: Resiliency in Azure
-description: Learn about resiliency in Azure.
- Previously updated : 02/08/2022
-# Resiliency in Azure
-
-**Resiliency** is a system's ability to recover from failures and continue to function. It's not only about avoiding failures but also about responding to failures in a way that minimizes downtime or data loss. Because failures can occur at various levels, it's important to have protection for all types based on your service availability requirements. Resiliency in Azure supports and advances capabilities that respond to outages in real time to ensure continuous service and data protection assurance for mission-critical applications that require near-zero downtime and high customer confidence.
-
-Azure includes built-in resiliency services that you can use and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve resiliency. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
-
-## Resiliency requirements
-
-The required level of resilience for any Azure solution depends on several considerations. Availability and latency SLA and other business requirements drive the architectural choices and resiliency level and should be considered first. Availability requirements range from how much downtime is acceptable, and how much it costs your business, to the amount of money and time that you can realistically invest in making an application highly available.
-
-Building resilient systems on Azure is a **shared responsibility**. Microsoft is responsible for the reliability of the cloud platform, including its global network and data centers. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. While Azure continually strives for the highest possible resiliency in SLA for the cloud platform, you must define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether the architecture meets the business requirements. As you strive for higher percentages of SLA guaranteed uptime, the cost and complexity to achieve that level of availability grows. An uptime of 99.99 percent translates to about five minutes of total downtime per month. Is the added complexity and cost to reach that percentage worth it? The answer depends on the individual business requirements. While deciding final SLA commitments, understand Microsoft's supported SLAs. Each Azure service has its own SLA.
-
-## Building resiliency
-
-You should define your application's availability requirements at the beginning of planning. If you know which applications don't need 100% high availability during certain periods of time, you can optimize costs during those non-critical periods. Identify the type of failures an application can experience, and the potential effect of each failure. A recovery plan should cover all critical services by finalizing a recovery strategy at the individual component and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. Finally, test the end-to-end application environment to measure application resiliency and recovery against unexpected failure.
-
-The following checklist covers the scope of resiliency planning.
-
-| **Resiliency planning** |
-| |
-| **Define** availability and recovery targets to meet business requirements. |
-| **Design** the resiliency features of your applications based on the availability requirements. |
-| **Align** applications and data platforms to meet your reliability requirements. |
-| **Configure** connection paths to promote availability. |
-| **Use** availability zones and disaster recovery planning where applicable to improve reliability and optimize costs. |
-| **Ensure** your application architecture is resilient to failures. |
-| **Know** what happens if SLA requirements are not met. |
-| **Identify** possible failure points in the system; application design should tolerate dependency failures by implementing circuit breakers. |
-| **Build** applications that operate in the absence of their dependencies. |
-
-## Regions and availability zones
-
-Regions and Availability Zones are a big part of the resiliency equation. Regions feature multiple, physically separate availability zones. These availability zones are connected by a high-performance network featuring less than 2ms latency between physical zones. Low latency helps your data stay synchronized and accessible when things go wrong. You can use this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions. Choose the best region for your needs based on technical and regulatory considerations (service capabilities, data residency, compliance requirements, latency) and begin advancing your resiliency strategy.
-
-Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your disaster recovery and business continuity strategy needs. For more information, see [Azure regions and availability zones](az-overview.md).
-
-## Shared responsibility
-
-Building resilient systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the cloud platform, which includes its global network and datacenters. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. For more information, see [Business continuity management program in Azure](business-continuity-management-program.md).
-
-## Azure service dependencies
-
-Microsoft Azure services are available globally to drive your cloud operations at an optimal level. You can choose the best region for your needs based on technical and regulatory considerations: service capabilities, data residency, compliance requirements, and latency.
-
-Azure services deployed to Azure regions are listed on the [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) page. To better understand regions and Availability Zones in Azure, see [Regions and Availability Zones in Azure](az-overview.md).
-
-Azure services are built for resiliency including high availability and disaster recovery. There are no services that are dependent on a single logical data center (to avoid single points of failure). Non-regional services listed on [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) are services for which there is no dependency on a specific Azure region. Non-regional services are deployed to two or more regions, and if there is a regional failure, the instance of the service in another region continues servicing customers. Certain non-regional services enable customers to specify the region where the underlying virtual machine (VM) on which the service runs will be deployed. For example, [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) enables customers to specify the region location where the VM resides. All Azure services that store customer data allow the customer to specify the specific regions in which their data will be stored. The exception is [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory/), which has geo placement (such as Europe or North America). For more information about data storage residency, see the [Data residency map](https://azure.microsoft.com/global-infrastructure/data-residency/).
-
-If you need to understand dependencies between Azure services to help better architect your applications and services, you can request the **Azure service dependency documentation** by contacting your Microsoft sales or customer representative. This document lists the dependencies for Azure services, including dependencies on any common major internal services such as control plane services. To obtain this documentation, you must be a Microsoft customer and have the appropriate non-disclosure agreement (NDA) with Microsoft.
-
-## Next steps
-- [Regions and availability zones in Azure](az-overview.md)
-- [Azure services that support availability zones](az-region.md)
-- [Azure Resiliency whitepaper](https://azure.microsoft.com/resources/resilience-in-azure-whitepaper/)
-- [Azure Well-Architected Framework](https://www.aka.ms/WellArchitected/Framework)
-- [Azure architecture guidance](/azure/architecture/high-availability/building-solutions-for-high-availability)
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
See this article for information on how to set it up.
If a cache is configured to use two or more zones as described above, the cache nodes are created in different zones. When a zone goes down, cache nodes in other zones are available to keep the cache functioning as usual.
-Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../availability-zones/az-overview.md) in the same region. It eliminates data center or Availability Zone outage as a single point of failure and increases the overall availability of your cache.
+Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../reliability/availability-zones-overview.md) in the same region. It eliminates data center or Availability Zone outage as a single point of failure and increases the overall availability of your cache.
### Premium tier
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
Last updated 06/07/2022
In this article, you'll learn how to configure a zone-redundant Azure Cache instance using the Azure portal.
-Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](../virtual-machines/availability.md) and highly available, they're susceptible to datacenter level failures. Azure Cache for Redis also supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [Availability Zones](../availability-zones/az-overview.md). It provides higher resilience and availability.
+Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](../virtual-machines/availability.md) and highly available, they're susceptible to datacenter level failures. Azure Cache for Redis also supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [Availability Zones](../reliability/availability-zones-overview.md). It provides higher resilience and availability.
> [!NOTE]
> Data transfer between Azure Availability Zones will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
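In the resource model, zone selection surfaces as a top-level `zones` array on the cache. A sketch with the JavaScript management SDK (`@azure/arm-rediscache`); the client and method names are assumptions to verify against your SDK version, and all names are placeholders:

```js
import { RedisManagementClient } from "@azure/arm-rediscache";
import { DefaultAzureCredential } from "@azure/identity";

const client = new RedisManagementClient(new DefaultAzureCredential(), "<subscription-id>");

// Premium P1 cache whose nodes are pinned to three availability zones.
await client.redis.beginCreateAndWait("<resource-group>", "<cache-name>", {
  location: "eastus2",
  sku: { name: "Premium", family: "P", capacity: 1 },
  zones: ["1", "2", "3"],
});
```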
azure-functions Azure Functions Az Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/azure-functions-az-redundancy.md
Azure function apps in the Premium plan can be deployed into availability zones to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
-Availability zones support for Azure Functions is available on Premium (Elastic Premium) and Dedicated (App Service) plans. A zone-redundant function app plan automatically balances its instances between availability zones for higher availability. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, refer [here](../availability-zones/migrate-app-service.md).
+Availability zones support for Azure Functions is available on Premium (Elastic Premium) and Dedicated (App Service) plans. A zone-redundant function app plan automatically balances its instances between availability zones for higher availability. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, refer [here](../reliability/migrate-app-service.md).
[!INCLUDE [functions-premium-plan-note](../../includes/functions-premium-plan-note.md)]

## Overview
-An [availability zone](../availability-zones/az-overview.md#availability-zones) is a high-availability offering that protects your applications and data from datacenter failures. Availability zones are unique physical locations within an Azure region. Each zone comprises one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high-availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating into other zones.
+An [availability zone](../reliability/availability-zones-overview.md) is a high-availability offering that protects your applications and data from datacenter failures. Availability zones are unique physical locations within an Azure region. Each zone comprises one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high-availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating into other zones.
A zone redundant function app automatically distributes the instances your app runs on between the availability zones in the region. For apps running in a zone-redundant Premium plan, even as the app scales in and out, the instances the app is running on are still evenly distributed between availability zones.
Availability zone support is a property of the Premium plan. The following are t
- You can only enable availability zones when creating a Premium plan for your function app. You can't convert an existing Premium plan to use availability zones.
- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](storage-considerations.md#storage-account-requirements). If you use a different type of storage account, Functions may show unexpected behavior during a zonal outage.
- Both Windows and Linux are supported.
-- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. To learn how to use zone redundancy with a Dedicated plan, see [Migrate App Service to availability zone support](../availability-zones/migrate-app-service.md).
+- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. To learn how to use zone redundancy with a Dedicated plan, see [Migrate App Service to availability zone support](../reliability/migrate-app-service.md).
- Availability zone support isn't currently available for function apps on [Consumption](consumption-plan.md) plans.
- Function apps hosted on a Premium plan must have a minimum [always ready instances](functions-premium-plan.md#always-ready-instances) count of three.
  - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three.
-- If you aren't using Premium plan or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](../availability-zones/migrate-functions.md).
+- If you aren't using Premium plan or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](../reliability/migrate-functions.md).
## Regional availability
There are currently two ways to deploy a zone-redundant Premium plan and functio
| Setting | Suggested value | Notes for Zone Redundancy |
| --- | --- | --- |
| **Storage Account** | A [zone-redundant storage account](storage-considerations.md#storage-account-requirements) | As mentioned above in the [requirements](#requirements) section, we strongly recommend using a zone-redundant storage account for your zone redundant function app. |
- | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../availability-zones/migrate-app-service.md). |
+ | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../reliability/migrate-app-service.md). |
| **Zone Redundancy** | Enabled | This field populates the flag that determines if your app is zone redundant or not. You won't be able to select `Enabled` unless you have chosen a region supporting zone redundancy, as mentioned in step 2. |

![Screenshot of Hosting tab of function app create page.](./media/functions-az-redundancy/azure-functions-hosting-az.png)
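For the ARM-template path, the decisive settings sit on the plan resource. A minimal sketch of the relevant fragment, shown here as a JavaScript object literal for readability; the property names follow the `Microsoft.Web/serverfarms` schema, and the name and region are placeholders:

```js
// Elastic Premium plan with zone redundancy; capacity must be at least 3.
const zoneRedundantPlan = {
  type: "Microsoft.Web/serverfarms",
  apiVersion: "2021-02-01",
  name: "my-premium-plan",
  location: "eastus2", // must be a region that supports availability zones
  kind: "elastic",
  sku: { name: "EP1", tier: "ElasticPremium", capacity: 3 },
  properties: { zoneRedundant: true },
};
```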
After the zone-redundant plan is created and deployed, any function app hosted o
## Migrate your function app to a zone-redundant plan
-For information on how to migrate the public multi-tenant Premium plan from non-availability zone to availability zone support, see [Migrate App Service to availability zone support](../availability-zones/migrate-functions.md).
+For information on how to migrate the public multi-tenant Premium plan from non-availability zone to availability zone support, see [Migrate App Service to availability zone support](../reliability/migrate-functions.md).
## Pricing
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
recommendations: false
#Customer intent: As a developer, I need to understand the differences between running in-process and running in an isolated worker process so that I can choose the best process model for my functions.
-# Differences between in-process and isolate worker process .NET Azure Functions
+# Differences between in-process and isolated worker process .NET Azure Functions
Functions supports two process models for .NET class library functions:
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Identity-based connections are supported by the following components:
| Connection source | Plans supported | Learn more |
||--|--|
-| Azure Blob triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-storage-blob.md#install-extension) |
-| Azure Queue triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) |
-| Azure Event Hubs triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-event-hubs.md?tabs=extensionv5) |
-| Azure Service Bus triggers and bindings | All | [Extension version 5.0.0 or later](./functions-bindings-service-bus.md) |
-| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Extension version 4.0.0-preview1 or later](.//functions-bindings-cosmosdb-v2.md?tabs=extensionv4) |
-| Azure Tables (when using Azure Storage) - Preview | All | [Azure Cosmos DB for Table extension](./functions-bindings-storage-table.md#table-api-extension) |
+| Azure Blob triggers and bindings | All | [Extension version 5.0.0 or later][blobv5]<br/>[Extension bundle 3.3.0 or later][blobv5] |
+| Azure Queue triggers and bindings | All | [Extension version 5.0.0 or later][queuev5]<br/>[Extension bundle 3.3.0 or later][queuev5] |
+| Azure Event Hubs triggers and bindings | All | [Extension version 5.0.0 or later][eventhubv5]<br/>[Extension bundle 3.3.0 or later][eventhubv5] |
+| Azure Service Bus triggers and bindings | All | [Extension version 5.0.0 or later][servicebusv5]<br/>[Extension bundle 3.3.0 or later][servicebusv5] |
+| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Extension version 4.0.0-preview1 or later][cosmosv4]<br/> [Preview extension bundle 4.0.0 or later][cosmosv4]|
+| Azure Tables (when using Azure Storage) - Preview | All | [Azure Cosmos DB for Table extension](./functions-bindings-storage-table.md#table-api-extension)<br/>[Extension bundle 3.3.0 or later][tablesv1] |
| Durable Functions storage provider (Azure Storage) - Preview | All | [Extension version 2.7.0 or later](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) |
| Host-required storage ("AzureWebJobsStorage") - Preview | All | [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity-preview) |
+[blobv5]: ./functions-bindings-storage-blob.md#install-extension
+[queuev5]: ./functions-bindings-storage-queue.md#storage-extension-5x-and-higher
+[eventhubv5]: ./functions-bindings-event-hubs.md?tabs=extensionv5
+[servicebusv5]: ./functions-bindings-service-bus.md
+[cosmosv4]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
+[tablesv1]: ./functions-bindings-storage-table.md#table-api-extension
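The common thread across these extensions is that identity-based connections swap a secret-bearing connection string for `__`-suffixed app settings that name the target resource. A sketch of the pattern (values are placeholders; the exact setting names per service are listed in each extension's reference):

```js
// Identity-based connection settings: no keys or connection strings stored.
const appSettings = {
  // Host storage resolved by account name instead of an "AzureWebJobsStorage" connection string:
  "AzureWebJobsStorage__accountName": "<storage-account-name>",
  // A Service Bus connection named "ServiceBusConnection" resolved by namespace:
  "ServiceBusConnection__fullyQualifiedNamespace": "<namespace>.servicebus.windows.net",
};
```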
+
[!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)]

Choose a tab below to learn about permissions for each component:
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
To use the globally hosted Azure Content Delivery Network version of the *Azure
>npm install azure-maps-indoor ```
- 2. Reference the *Azure Maps Indoor* module JavaScript and Style Sheet in the `<head>` element of the HTML file:
+ 2. Import the *Azure Maps Indoor* module JavaScript and Style Sheet in a source file:
- ```html
- <link rel="stylesheet" href="node_modules/azure-maps-indoor/dist/atlas-indoor.min.css" type="text/css" />
- <script src="node_modules/azure-maps-indoor/dist/atlas-indoor.min.js"></script>
+ ```js
+ import * as indoor from "azure-maps-indoor";
+ import "azure-maps-indoor/dist/atlas-indoor.min.css";
   ```

## Set the domain and instantiate the Map object
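A minimal sketch of that step with the core `azure-maps-control` package (the US Creator geography shown; swap the domain for your geography):

```js
import * as atlas from "azure-maps-control";

// Point SDK requests at the geography that hosts the Creator resource
// before constructing the Map object.
atlas.setDomain("us.atlas.microsoft.com");
```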
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
The Azure Maps Web SDK provides a *services module*. This module is a helper lib
`npm install azure-maps-rest`
- Then, add a script reference to the `<head>` element of the file:
+ Then, use an import declaration to add the module into a source file:
- ```html
- <script src="node_modules/azure-maps-rest/dist/atlas-service.min.js"></script>
+ ```js
+ import * as service from "azure-maps-rest";
   ```

1. Create an authentication pipeline. The pipeline must be created before you can initialize a service URL client endpoint. Use your own Azure Maps account key or Azure Active Directory (Azure AD) credentials to authenticate an Azure Maps Search service client. In this example, the Search service URL client will be created, as sketched below.
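A sketch of that step using the subscription-key flow (replace the key placeholder with your own):

```js
// Build a subscription-key authentication pipeline, then a Search URL client.
const subscriptionKeyCredential = new service.SubscriptionKeyCredential("<Your Azure Maps Key>");
const pipeline = service.MapsURL.newPipeline(subscriptionKeyCredential);
const searchURL = new service.SearchURL(pipeline);
```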
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
You can load the Azure Maps spatial IO module using one of the two options:
npm install azure-maps-spatial-io ```
- Then, add a reference to the JavaScript in the `<head>` element of the HTML document:
+ Then, use an import declaration to add the module into a source file:
- ```html
- <script src="node_modules/azure-maps-spatial-io/dist/atlas-spatial.min.js"></script>
+ ```js
+ import * as spatial from "azure-maps-spatial-io";
   ```

## Using the Spatial IO module
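As a quick usage sketch, `read` parses common spatial formats into GeoJSON. This assumes the npm build exposes the same `io` namespace as the CDN `atlas.io` API:

```js
// Parse a small KML string; read() also handles KMZ, GPX, GeoRSS, CSV, and more.
const kml = '<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark><Point><coordinates>-122.33,47.6</coordinates></Point></Placemark></kml>';
spatial.io.read(kml).then((data) => {
  console.log(data.features.length); // features parsed as GeoJSON
});
```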
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
The Azure Maps Web SDK provides a *drawing tools module*. This module makes it e
`npm install azure-maps-drawing-tools`
- Then, add a reference to the JavaScript and CSS stylesheet in the `<head>` element of the file:
+ Then, import the JavaScript and CSS stylesheet in a source file:
- ```html
- <link rel="stylesheet" href="node_modules/azure-maps-drawing-tools/dist/atlas-drawing.min.css" type="text/css" />
- <script src="node_modules/azure-maps-drawing-tools/dist/atlas-drawing.min.js"></script>
+ ```js
+ import * as drawing from "azure-maps-drawing-tools";
+ import "azure-maps-drawing-tools/dist/atlas-drawing.min.css";
   ```

## Use the drawing manager directly
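A usage sketch, assuming the npm exports mirror the CDN `atlas.drawing` and `atlas.control` namespaces and that `map` is an existing `azure-maps-control` Map instance:

```js
// Create a drawing manager with an on-map toolbar once the map has loaded.
map.events.add("ready", () => {
  const drawingManager = new drawing.DrawingManager(map, {
    toolbar: new drawing.control.DrawingToolbar({ position: "top-right" }),
  });
});
```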
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Again, only if you're using an older version of the agent, the python2 executabl
### Supported Linux hardening

The OMS Agent has limited customization and hardening support for Linux.
-The following are currently supported:
-- FIPS
+The following are currently supported:
- SELinux (Marketplace images for CentOS and RHEL with their default settings)

The following aren't supported:

- CIS
- SELinux (custom hardening like MLS)
-CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods aren't supported nor planned for OMS Agent. For instance, OS images like GitHub Enterprise Server which include customizations such as limitations to user account privileges aren't supported.
+CIS, FIPS, and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods are neither supported nor planned for OMS Agent. For instance, OS images like GitHub Enterprise Server, which include customizations such as limitations to user account privileges, aren't supported.
### Agent prerequisites
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Here's a short **introduction to Azure Monitor agent video**, which includes a q
## Consolidating legacy agents
-Deploy Azure Monitor Agent on all new virtual machines, scale sets and on premise servers to collect data for [supported services and features](#supported-services-and-features).
+Deploy Azure Monitor Agent on all new virtual machines, scale sets and on-premises servers to collect data for [supported services and features](#supported-services-and-features).
If you have machines already deployed with legacy Log Analytics agents, we recommend you [migrate to Azure Monitor Agent](./azure-monitor-agent-migration.md) as soon as possible. The legacy Log Analytics agent will not be supported after August 2024. Azure Monitor Agent replaces the Azure Monitor legacy monitoring agents: -- [Log Analytics Agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports monitoring solutions. This is fully consolidated into Azure Monitor agent.
+- [Log Analytics Agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports monitoring solutions. This is fully consolidated into Azure Monitor agent.
- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only). Only basic Telegraf plugins are supported today in Azure Monitor agent.
- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage. This is not consolidated yet.
-## Install the agent and configure data collection
+## Install the agent and configure data collection
-Azure Monitor Agent uses [data collection rules](../essentials/data-collection-rule-overview.md), using which you define which data you want each agent to collect. Data collection rules let you manage data collection settings at scale and define unique, scoped configurations for subsets of machines. The rules are independent of the workspace and the virtual machine, which means you can define a rule once and reuse it across machines and environments.
+Azure Monitor Agent uses [data collection rules](../essentials/data-collection-rule-overview.md), where you define which data you want each agent to collect. Data collection rules let you manage data collection settings at scale and define unique, scoped configurations for subsets of machines. The rules are independent of the workspace and the virtual machine, which means you can define a rule once and reuse it across machines and environments.
**To collect data using Azure Monitor Agent:**
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| Data source | Destinations | Description | |:|:|:|
- | Performance | Azure Monitor Metrics (Public preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
+ | Performance | Azure Monitor Metrics (Public preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
| Windows event logs (including sysmon events) | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
- | Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
-
+ | Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
+ <sup>1</sup> On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br> <sup>2</sup> Azure Monitor Linux Agent versions 1.15.2 and higher support syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
- >[!NOTE]
- >On rsyslog-based systems, Azure Monitor Linux Agent adds forwarding rules to the default ruleset defined in the rsyslog configuration. If multiple rulesets are used, inputs bound to non-default ruleset(s) are **not** forwarded to Azure Monitor Agent. For more information about multiple rulesets in rsyslog, see the [official documentation](https://www.rsyslog.com/doc/master/concepts/multi_ruleset.html).
+ > [!NOTE]
+ > On rsyslog-based systems, Azure Monitor Linux Agent adds forwarding rules to the default ruleset defined in the rsyslog configuration. If multiple rulesets are used, inputs bound to non-default ruleset(s) are **not** forwarded to Azure Monitor Agent. For more information about multiple rulesets in rsyslog, see the [official documentation](https://www.rsyslog.com/doc/master/concepts/multi_ruleset.html).
## Supported services and features
In addition to the generally available data collection listed above, Azure Monit
## Supported regions

Azure Monitor Agent is available in all public regions and Azure Government clouds, for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
+
## Costs

There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).

## Compare to legacy agents
-The tables below provide a comparison of Azure Monitor Agent with the legacy the Azure Monitor telemetry agents for Windows and Linux.
+The tables below provide a comparison of Azure Monitor Agent with the legacy Azure Monitor telemetry agents for Windows and Linux.
### Windows agents
View [supported operating systems for Azure Arc Connected Machine agent](../../a
<sup>2</sup> Requires Python 2 to be installed on the machine and aliased to the `python` command.<br> <sup>3</sup> Also supported on Arm64-based machines.
->[!NOTE]
->Machines and appliances that run heavily customized or stripped-down versions of the above distributions and hosted solutions that disallow customization by the user are not supported. Azure Monitor and legacy agents rely on various packages and other baseline functionality that is often removed from such systems, and their installation may require some environmental modifications considered to be disallowed by the appliance vendor. For instance, [GitHub Enterprise Server](https://docs.github.com/en/enterprise-server/admin/overview/about-github-enterprise-server) is not supported due to heavy customization as well as [documented, license-level disallowance](https://docs.github.com/en/enterprise-server/admin/overview/system-overview#operating-system-software-and-patches) of operating system modification.
+> [!NOTE]
+> Machines and appliances that run heavily customized or stripped-down versions of the above distributions and hosted solutions that disallow customization by the user are not supported. Azure Monitor and legacy agents rely on various packages and other baseline functionality that is often removed from such systems, and their installation may require some environmental modifications considered to be disallowed by the appliance vendor. For instance, [GitHub Enterprise Server](https://docs.github.com/en/enterprise-server/admin/overview/about-github-enterprise-server) is not supported due to heavy customization as well as [documented, license-level disallowance](https://docs.github.com/en/enterprise-server/admin/overview/system-overview#operating-system-software-and-patches) of operating system modification.
## Next steps
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
The following section walks through an example of creating a custom log. The sam
We provide one of the log files and can see the events that it will be collecting. In this case, **New line** is a sufficient delimiter. If a single entry in the log could span multiple lines though, a timestamp delimiter would need to be used.
-![Screenshot that shows uploading and parsing a sample log.](media/data-sources-custom-logs/delimiter.png)
### Add log collection paths

The log files will be located in *C:\MyApp\Logs*. A new file will be created each day with a name that includes the date in the pattern *appYYYYMMDD.log*. A sufficient pattern for this log would be *C:\MyApp\Logs\\\*.log*.
-![Screenshot that shows adding a log collection path.](media/data-sources-custom-logs/collection-path.png)
### Provide a name and description for the log

We use a name of *MyApp_CL* and type in a **Description**.
-![Screenshot that shows adding a log name.](media/data-sources-custom-logs/log-name.png)
### Validate that the custom logs are being collected

We use a simple query of *MyApp_CL* to return all records from the collected log.
-![Screenshot that shows a log query with no custom fields.](media/data-sources-custom-logs/query-01.png)
## Alternatives to custom logs
azure-monitor Change Analysis Custom Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-custom-filters.md
Browsing through a long list of changes in the entire subscription is time consu
| Time range | Specifies how far back the UI displays changes, up to 14 days. By default, it's set to the past 24 hours. |
| Resource group | Select the resource group to scope the changes. By default, all resource groups are selected. |
| Change level | Controls which levels of changes to display. Levels include: important, normal, and noisy. <ul><li>Important: related to availability and security</li><li>Noisy: Read-only properties that are unlikely to cause any issues</li></ul> By default, important and normal levels are checked. |
-| Resource | Select **Add filter** to use this filter. </br> Filter the changes to specific resources. Helpful if you already know which resources to look at for changes. |
+| Resource | Select **Add filter** to use this filter. </br> Filter the changes to specific resources. Helpful if you already know which resources to look at for changes. [If the filter is only returning 1,000 resources, see the corresponding solution in troubleshooting guide](./change-analysis-troubleshoot.md#cant-filter-to-your-resource-to-view-changes). |
| Resource type | Select **Add filter** to use this filter. </br> Filter the changes to specific resource types. |

### Search bar

The search bar filters the changes according to the input keywords. Search bar results apply only to the changes loaded by the page already and don't pull in results from the server side.
+
## Next steps

[Troubleshoot Change Analysis](./change-analysis-troubleshoot.md).
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
This error message may occur in the Azure portal when loading change data via th
To load all change data, try waiting a few minutes and refreshing the page. If you are still only receiving partial data, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
+
## You don't have enough permissions to view some changes. Contact your Azure subscription administrator.

This general unauthorized error message occurs when the current user doesn't have sufficient permissions to view the change. At minimum,
To troubleshoot virtual machine issues using the troubleshooting tool in the Azu
![Screenshot of the tile for the Analyze recent changes troubleshooting tool for a Virtual Machine.](./media/change-analysis/analyze-recent-changes.png)
+## Can't filter to your resource to view changes
+
+When filtering down to a particular resource in the Change Analysis standalone page, you may encounter a known limitation that only returns 1,000 resource results. To filter through and pinpoint changes for one of your 1,000+ resources:
+
+1. In the Azure portal, select **All resources**.
+1. Select the actual resource you want to view.
+1. In that resource's left side menu, select **Diagnose and solve problems**.
+1. Select **Change details**.
+From here, you'll be able to view all of the changes for that one resource.
## Next steps
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
Every 30 minutes, Change Analysis captures the configuration state of a web appl
:::image type="content" source="./media/change-analysis/scan-changes.png" alt-text="Screenshot of the selecting the Refresh button to view latest changes.":::
-If you don't see file changes within 30 minutes or configuration changes within 6 hours, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app). [See known limitations.](#limitations)
+If you don't see file changes within 30 minutes or configuration changes within 6 hours, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+[See known limitations.](#limitations)
Currently, all text-based files under site root **wwwroot** with the following extensions are supported:
Currently the following dependencies are supported in **Web App Diagnose and sol
- **Web app deployment changes**: Code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**. - **App Services file changes**: File changes take up to 30 minutes to display. - **App Services configuration changes**: Due to the snapshot approach to configuration changes, timestamps of configuration changes could take up to 6 hours to display from when the change actually happened.
+- **Web app deployment and configuration changes**: Since these changes are collected by a site extension and stored on disk space owned by your application, data collection and storage are subject to your application's behavior. Check to see if a misbehaving application is affecting the results.
## Next steps
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
The containerized Linux agent (replicaset pod) makes API calls to all the Window
If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is opened for both inbound and outbound in the cluster's virtual network.
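As an illustration, the following PowerShell sketch opens the Kubelet secure port on an existing network security group. This is a minimal sketch under assumptions: the NSG name (`aks-node-nsg`), resource group (`ContosoRG`), and rule priorities are illustrative and not from this article; adapt them to your cluster's configuration.

```PowerShell
# Assumed names for illustration only; replace with your cluster's NSG and resource group.
$nsg = Get-AzNetworkSecurityGroup -Name "aks-node-nsg" -ResourceGroupName "ContosoRG"

# Allow inbound traffic to the Kubelet secure port (10250) within the virtual network
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowKubeletInbound" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 310 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 10250 | Out-Null

# Allow outbound traffic to the same port
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowKubeletOutbound" `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 320 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 10250 | Out-Null

# Persist both rules on the NSG
$nsg | Set-AzNetworkSecurityGroup
```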
-### Network firewall requirements
-
-For information on the firewall requirements for the AKS cluster, see [Network firewall requirements](#network-firewall-requirements).
- ## Authentication Container insights now supports authentication by using managed identity (in preview). This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Previously updated : 10/03/2022 Last updated : 11/09/2022
When you use category groups, you:
Currently, there are two category groups: - **All**: Every resource log offered by the resource.-- **Audit**: All resource logs that record customer interactions with data or the settings of the service.
+- **Audit**: All resource logs that record customer interactions with data or the settings of the service. Note that Audit logs are an attempt by each resource provider to provide the most relevant audit data, but may not be considered sufficient from an auditing standards perspective.
### Activity log
See the [Activity log settings](#activity-log-settings) section.
## Destinations
-Platform logs and metrics can be sent to the destinations listed in the following table.
+Platform logs and metrics can be sent to the destinations listed in the following table.
+
+To ensure the security of data in transit, we strongly encourage you to configure Transport Layer Security (TLS). All destination endpoints support TLS 1.2.
| Destination | Description | |:|:|
The following table provides unique requirements for each destination including
| Destination | Requirements | |:|:| | Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | It is not recommended to use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.|
+| Storage account | To better control access to the data, don't use an existing storage account that has other, non-monitoring data stored in it. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blob writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](/azure/storage/common/storage-account-overview#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](/azure/storage/common/storage-redundancy#locally-redundant-storage) (locally redundant storage) storage accounts are not supported as a log or metric destination.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
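To make the destination and category-group concepts concrete, here's a hedged sketch that creates a diagnostic setting sending the **Audit** category group to a Log Analytics workspace through the diagnostic settings REST API. The setting name and every `{placeholder}` segment are assumptions to replace with your own values.

```PowerShell
# All {placeholder} values and the setting name are illustrative assumptions.
$settingParams = @'
{
  "properties": {
    "workspaceId": "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}",
    "logs": [
      {
        "categoryGroup": "audit",
        "enabled": true
      }
    ]
  }
}
'@

# {resourceUri} is the full ARM ID of the resource being monitored
Invoke-AzRestMethod -Path "{resourceUri}/providers/Microsoft.Insights/diagnosticSettings/AuditToWorkspace?api-version=2021-05-01-preview" -Method PUT -Payload $settingParams
```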
If you receive this error, update your deployments to replace any metric categor
Diagnostic settings don't support resource IDs with non-ASCII characters. For example, consider the term Preproducción. Because you can't rename resources in Azure, your only option is to create a new resource without the non-ASCII characters. If the characters are in a resource group, you can move the resources under it to a new one. Otherwise, you'll need to re-create the resource.
+### Possibility of duplicated or dropped data
+
+Every effort is made to ensure all log data is sent correctly to your destinations. However, it's not possible to guarantee 100% transfer of log data between endpoints. Retries and other mechanisms are in place to work around these issues and attempt to ensure log data arrives at the endpoint.
+ ## Next step [Read more about Azure platform logs](./platform-logs-overview.md)
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Title: Configure Basic Logs in Azure Monitor
-description: Learn how to configure a table for Basic Logs in Azure Monitor.
-- Previously updated : 10/01/2022
+ Title: Set a table's log data plan in Azure Monitor Logs
+description: Learn how to configure the table log data plan to optimize log ingestion and retention costs in Azure Monitor Logs.
++++ Last updated : 11/09/2022
-# Configure Basic Logs in Azure Monitor
+# Set a table's log data plan in Azure Monitor Logs
-Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-plans) to **Basic Logs** lets you save on the cost of storing high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts. This article describes how to configure Basic Logs for a particular table in your Log Analytics workspace.
+Azure Monitor Logs offers two log data plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytics capabilities based on your needs:
+
+- The default **Analytics** log data plan provides full analysis capabilities and makes log data available for queries, Azure Monitor features, such as alerts, and use by other services.
+- The **Basic** log data plan lets you save on the cost of ingesting and storing high-volume verbose logs in your Log Analytics workspace for debugging, troubleshooting, and auditing, but not for analytics and alerts.
+
+This article describes Azure Monitor's log data plans and explains how to configure the log data plan of the tables in your Log Analytics workspace.
> [!IMPORTANT]
-> You can switch a table's plan once a week. The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+> You can switch a table's plan once a week.<br/> The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+
+## Compare the Basic and Analytics log data plans
+
+The following table summarizes the two plans.
+
+| Category | Analytics | Basic |
+|:|:|:|
+| Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
+| Log queries | No extra cost. Full query capabilities. | Extra cost.<br>[Subset of query capabilities](basic-logs-query.md#limitations). |
+| Retention | Configure retention from 30 days to 730 days. | Retention fixed at eight days. |
+| Alerts | Supported. | Not supported. |
+
+> [!NOTE]
+> The Basic log data plan isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
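To make the comparison concrete, the following hedged sketch switches a table to the Basic plan with the Tables - Update API. The table name *ContainerLogV2* is only an example of a table that supports Basic Logs, and the `{placeholder}` segments are assumptions.

```PowerShell
# Table name and {placeholder} values are illustrative; pick a table that supports Basic Logs.
$tableParams = @'
{
  "properties": {
    "plan": "Basic"
  }
}
'@

Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/ContainerLogV2?api-version=2021-12-01-preview" -Method PATCH -Payload $tableParams
```

Switching the same table back uses the identical call with `"plan": "Analytics"`.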
## Which tables support Basic Logs?
By default, all tables in your Log Analytics workspace are Analytics tables, and
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) don't support Basic Logs.
-## Set table configuration
+## Set a table's log data plan
# [Portal](#tab/portal-1)
For example:
-## Check table configuration
+## View a table's log data plan
# [Portal](#tab/portal-2)
-To check table configuration in the Azure portal, you can open the table configuration screen, as described in [Set table configuration](#set-table-configuration).
+To check table configuration in the Azure portal, you can open the table configuration screen, as described in [Set a table's log data plan](#set-a-tables-log-data-plan).
Alternatively:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{
|Name | Type | Description | | | | | |properties.plan | string | The table plan. Either `Analytics` or `Basic`. |
-|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is 8 days, fixed. In `Analytics Logs`, the value is between 7 and 730 days.|
+|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is eight days, fixed. In `Analytics Logs`, the value is between 7 and 730 days.|
|properties.totalRetentionInDays | integer | The table's data retention that also includes the archive period.| |properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).| |properties.lastPlanModifiedDate|String|Last time when the plan was set for this table. Null if no change was ever done from the default settings (read-only).
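For example, a minimal sketch that reads these properties from PowerShell; the table name and `{placeholder}` values are assumptions.

```PowerShell
# Read the table resource and extract the plan and retention properties
$response = Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/ContainerLogV2?api-version=2021-12-01-preview" -Method GET

$table = $response.Content | ConvertFrom-Json
$table.properties.plan                  # "Analytics" or "Basic"
$table.properties.retentionInDays       # interactive retention
$table.properties.totalRetentionInDays  # retention including the archive period
```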
Basic Logs tables retain data for eight days. When you change an existing table'
## Next steps -- [Learn more about the different log plans](log-analytics-workspace-overview.md#log-data-plans) - [Query data in Basic Logs](basic-logs-query.md)
+- [Set retention and archive policies](../logs/data-retention-archive.md)
+
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Title: Query data from Basic Logs in Azure Monitor description: Create a log query using tables configured for Basic logs in Azure Monitor.+++ Last updated 10/01/2022
Last updated 10/01/2022
# Query Basic Logs in Azure Monitor Basic Logs tables reduce the cost of ingesting high-volume verbose logs and let you query the data they store using a limited set of log queries. This article explains how to query data from Basic Logs tables.
-For more information, see [Azure log data plans](log-analytics-workspace-overview.md#log-data-plans) and [Configure a table for Basic Logs](basic-logs-configure.md).
+For more information, see [Set a table's log data plan](basic-logs-configure.md).
> [!NOTE]
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pr
## Next steps -- [Learn more about Basic Logs and the different log plans.](log-analytics-workspace-overview.md#log-data-plans)-- [Configure a table for Basic Logs.](basic-logs-configure.md)-- [Use a search job to retrieve data from Basic Logs into Analytics Logs where it can be queries multiple times.](search-jobs.md)
+- [Learn more about the Basic Logs and Analytics log plans](basic-logs-configure.md).
+- [Use a search job to retrieve data from Basic Logs into Analytics Logs where it can be queried multiple times](search-jobs.md).
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
+
+ Title: Add or delete tables and columns in Azure Monitor Logs
+description: Create a table with a custom schema to collect logs from any data source.
+++++ Last updated : 11/09/2022+
+# Customer intent: As a Log Analytics workspace administrator, I want to create a table with a custom schema to store logs from an Azure or non-Azure data source.
++
+# Add or delete tables and columns in Azure Monitor Logs
+
+[Data collection rules](../essentials/data-collection-rule-overview.md) let you [filter and transform log data](../essentials/data-collection-transformations.md) before sending the data to an [Azure table or a custom table](../logs/manage-logs-tables.md#table-type). This article explains how to create custom tables and add custom columns to tables in your Log Analytics workspace.
+
+## Prerequisites
+
+To create a custom table, you need:
+
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
+- A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md).
+- A JSON file with the schema of your custom table in the following format:
+ ```json
+ [
+ {
+ "TimeGenerated": "supported_datetime_format",
+        "<column_name_1>": "<column_name_1_value>",
+        "<column_name_2>": "<column_name_2_value>"
+ }
+ ]
+ ```
+
+ For information about the `TimeGenerated` format, see [supported datetime formats](/azure/data-explorer/kusto/query/scalar-data-types/datetime#supported-formats).
+## Create a custom table
+
+Azure tables have predefined schemas. To store log data in a different schema, use data collection rules to define how to collect, transform, and send the data to a custom table in your Log Analytics workspace.
+
+> [!NOTE]
+> For information about creating a custom table for logs you ingest with the deprecated Log Analytics agent, also known as MMA or OMS, see [Collect text logs with the Log Analytics agent](../agents/data-sources-custom-logs.md#define-a-custom-log).
+
+### [Portal](#tab/portal-1)
+
+To create a custom table in the Azure portal:
+
+1. From the **Log Analytics workspaces** menu, select **Tables**.
+
+ :::image type="content" source="media/manage-logs-tables/azure-monitor-logs-table-configuration.png" alt-text="Screenshot that shows the Tables screen for a Log Analytics workspace." lightbox="media/manage-logs-tables/azure-monitor-logs-table-configuration.png":::
+
+1. Select **Create** and then **New custom log (DCR-based)**.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/new-custom-log.png" lightbox="media/tutorial-logs-ingestion-portal/new-custom-log.png" alt-text="Screenshot showing new DCR-based custom log.":::
+
+1. Specify a name and, optionally, a description for the table. You don't need to add the *_CL* suffix to the custom table's name; it's added automatically to the name you specify in the portal.
+
+1. Select an existing data collection rule from the **Data collection rule** dropdown, or select **Create a new data collection rule** and specify the **Subscription**, **Resource group**, and **Name** for the new data collection rule.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/new-data-collection-rule.png" lightbox="media/tutorial-logs-ingestion-portal/new-data-collection-rule.png" alt-text="Screenshot showing new data collection rule.":::
+
+1. Select a [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-data-collection-endpoint) and select **Next**.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" alt-text="Screenshot showing custom log table name.":::
+
+1. Select **Browse for files** and locate the JSON file in which you defined the schema of your new table.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-browse-files.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-browse-files.png" alt-text="Screenshot showing custom log browse for files.":::
+
+ All log tables in Azure Monitor Logs must have a `TimeGenerated` column populated with the timestamp of the logged event.
+
+1. If you want to [transform log data before ingestion](../essentials/data-collection-transformations.md) into your table:
+
+ 1. Select **Transformation editor**.
+
+ The transformation editor lets you create a transformation for the incoming data stream. This is a KQL query that runs against each incoming record. Azure Monitor Logs stores the results of the query in the destination table.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-data-preview.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-data-preview.png" alt-text="Screenshot showing custom log data preview.":::
+
+ 1. Select **Run** to view the results.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-query-01.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-query-01.png" alt-text="Screenshot showing initial custom log data query.":::
+
+1. Select **Apply** to save the transformation and view the schema of the table that's about to be created. Select **Next** to proceed.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-final-schema.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-final-schema.png" alt-text="Screenshot showing custom log final schema.":::
+
+1. Verify the final details and select **Create** to save the custom log.
+
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-create.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-create.png" alt-text="Screenshot showing custom log create.":::
+
+### [PowerShell](#tab/powershell-1)
+
+Use the [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) to create a custom table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. Modify this schema to define the columns for your own table.
+
+> [!IMPORTANT]
+> Custom tables have a suffix of *_CL*; for example, *tablename_CL*. The *tablename_CL* name in the data collection rule's data flow streams must match the *tablename_CL* name in the Log Analytics workspace.
+
+1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell in the Azure portal.":::
+
+2. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
+
+ ```PowerShell
+ $tableParams = @'
+ {
+ "properties": {
+ "schema": {
+ "name": "MyTable_CL",
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "DateTime"
+ },
+ {
+ "name": "RawData",
+ "type": "String"
+ }
+ ]
+ }
+ }
+ }
+ '@
+
+ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+ ```
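3. If you also want the collected data transformed before it lands in the table, a data collection rule can apply a KQL transformation. The following is a hedged sketch only: the rule name, region, endpoint ID, workspace ID, and `transformKql` query are assumptions, not values from this article.

```PowerShell
# All {placeholder} values, the region, and the sample transformation are assumptions.
$dcrParams = @'
{
  "location": "eastus",
  "properties": {
    "dataCollectionEndpointId": "/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionEndpoints/{endpoint}",
    "streamDeclarations": {
      "Custom-MyTable_CL": {
        "columns": [
          { "name": "TimeGenerated", "type": "datetime" },
          { "name": "RawData", "type": "string" }
        ]
      }
    },
    "destinations": {
      "logAnalytics": [
        {
          "workspaceResourceId": "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}",
          "name": "myWorkspace"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Custom-MyTable_CL" ],
        "destinations": [ "myWorkspace" ],
        "transformKql": "source | where RawData !has 'DEBUG'",
        "outputStream": "Custom-MyTable_CL"
      }
    ]
  }
}
'@

Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionRules/MyCollectionRule?api-version=2021-09-01-preview" -Method PUT -Payload $dcrParams
```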
+++
+## Delete a table
+
+You can delete any table in your Log Analytics workspace that's not an [Azure table](../logs/manage-logs-tables.md#table-type).
+
+> [!NOTE]
+> Deleting a restored table doesn't delete the data in the source table.
+
+### [Portal](#tab/portal-2)
+
+To delete a table from the Azure portal:
+
+1. From the Log Analytics workspace menu, select **Tables**.
+1. Search for the tables you want to delete by name, or by selecting **Search results** in the **Type** field.
+
+ :::image type="content" source="media/search-job/search-results-on-log-analytics-tables-screen.png" alt-text="Screenshot that shows the Tables screen for a Log Analytics workspace with the Filter by name and Type fields highlighted." lightbox="media/search-job/search-results-on-log-analytics-tables-screen.png":::
+
+1. Select the table you want to delete, select the ellipsis ( **...** ) to the right of the table, select **Delete**, and confirm the deletion by typing **yes**.
+
+ :::image type="content" source="media/search-job/delete-table.png" alt-text="Screenshot that shows the Delete Table screen for a table in a Log Analytics workspace." lightbox="media/search-job/delete-table.png":::
+
+### [API](#tab/api-2)
+
+To delete a table, call the **Tables - Delete** API:
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
+```
+
+### [CLI](#tab/cli-2)
+
+To delete a table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH
+```
++
+## Add or delete a custom column
+
+To add a custom column to a table in your Log Analytics workspace, or delete a column:
+
+1. From the **Log Analytics workspaces** menu, select **Tables**.
+1. Select the ellipsis ( **...** ) to the right of the table you want to edit and select **Edit schema**.
+ This opens the **Schema Editor** screen.
+1. Scroll down to the **Custom Columns** section of the **Schema Editor** screen.
+
+ :::image type="content" source="media/create-custom-table/add-or-delete-column-azure-monitor-logs.png" alt-text="Screenshot showing the Schema Editor screen with the Add a column and Delete buttons highlighted." lightbox="media/create-custom-table/add-or-delete-column-azure-monitor-logs.png":::
+
+1. To add a new column:
+ 1. Select **Add a column**.
+ 1. Set the column name and description (optional), and select the expected value type from the **Type** dropdown.
+ 1. Select **Save** to save the new column.
+1. To delete a column, select the **Delete** icon to the left of the column you want to delete.
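Alternatively, you can manage custom columns programmatically by patching the table's schema with the same Tables - Update API used above. This hedged sketch appends an illustrative *Severity* column to *MyTable_CL*; the column name and type are assumptions, and restating the existing columns alongside the new one is assumed to be the safe approach.

```PowerShell
# The Severity column is an illustrative addition to the earlier MyTable_CL schema.
$tableParams = @'
{
  "properties": {
    "schema": {
      "name": "MyTable_CL",
      "columns": [
        { "name": "TimeGenerated", "type": "DateTime" },
        { "name": "RawData", "type": "String" },
        { "name": "Severity", "type": "String" }
      ]
    }
  }
}
'@

Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PATCH -Payload $tableParams
```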
+
+## Next steps
+
+Learn more about:
+
+- [Collecting logs with the Log Ingestion API](../logs/logs-ingestion-api-overview.md)
+- [Collecting logs with Azure Monitor Agent](../agents/agents-overview.md)
+- [Data collection rules](../essentials/data-collection-endpoint-overview.md)
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
Title: Configure data retention and archive in Azure Monitor Logs (preview)
+ Title: Configure data retention and archive in Azure Monitor Logs
description: Configure archive settings for a table in a Log Analytics workspace in Azure Monitor.-+ Last updated 10/01/2022 # Customer intent: As an Azure account administrator, I want to set data retention and archive policies to save retention costs.
The retention can also be [set programmatically with PowerShell](../app/powershe
- [Learn more about Log Analytics workspaces and data retention and archive](log-analytics-workspace-overview.md) - [Create a search job to retrieve archive data matching particular criteria](search-jobs.md)-- [Restore archive data within a particular time range](restore.md)
+- [Restore archive data within a particular time range](restore.md)
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Each workspace contains multiple tables that are organized into separate columns
## Cost
-There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it, which is also known as data ingestion. You're charged for how long that data is stored, which is otherwise known as data retention. These costs might vary based on the data plan of each table, as described in [Log data plans (preview)](#log-data-plans).
+There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it, which is also known as data ingestion. You're charged for how long that data is stored, which is otherwise known as data retention. These costs might vary based on the log data plan of each table, as described in [Log data plan](../logs/basic-logs-configure.md).
For information on pricing, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). For guidance on how to reduce your costs, see [Azure Monitor best practices - Cost management](../best-practices-cost.md). If you're using your Log Analytics workspace with services other than Azure Monitor, see the documentation for those services for pricing information.
-## Log data plans
-
-By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure [certain tables as **Basic Logs**](basic-logs-configure.md#which-tables-support-basic-logs) to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
-
-The following table summarizes the two plans. For more information on Basic Logs and how to configure them, see [Configure Basic Logs in Azure Monitor](basic-logs-configure.md).
-
-> [!NOTE]
-> The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
-
-| Category | Analytics Logs | Basic Logs |
-|:|:|:|
-| Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
-| Log queries | No extra cost. Full query capabilities. | Extra cost.<br>[Subset of query capabilities](basic-logs-query.md#limitations). |
-| Retention | Configure retention from 30 days to 730 days. | Retention fixed at 8 days. |
-| Alerts | Supported. | Not supported. |
- ## Workspace transformation DCR [Data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define data coming into Azure Monitor can include transformations that allow you to filter and transform data before it's ingested into the workspace. Since all data sources don't yet support DCRs, each workspace can have a [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr).
azure-monitor Manage Logs Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md
+
+ Title: Manage tables in a Log Analytics workspace
+description: Learn how to manage the data and costs related to a Log Analytics workspace effectively
+++ Last updated : 11/09/2022
+# Customer intent: As a Log Analytics workspace administrator, I want to understand the options I have for configuring tables in a Log Analytics workspace so that I can manage the data and costs related to a Log Analytics workspace effectively.
+++
+# Manage tables in a Log Analytics workspace
+
+Azure Monitor Logs stores log data in tables. Table configuration lets you define how to store collected data, how long to retain the data, and whether you collect the data for auditing and troubleshooting or for ongoing data analysis and regular use by features and services.
+
+This article explains the table configuration options in Azure Monitor Logs and how to manage table settings based on your data analysis and cost management needs.
+
+## Table configuration settings
+
+This diagram provides an overview of the table configuration options in Azure Monitor Logs:
++
+In the Azure portal, you can view and set table configuration settings by selecting **Tables** from your Log Analytics workspace.
++
+## Table type
+
+A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one space for data analysis, for use by other services, such as [Sentinel](../../../articles/sentinel/overview.md), and for triggering alerts and actions, for example, by using [Logic Apps](../logs/logicapp-flow-connector.md).
+
+Your Log Analytics workspace can contain the following types of tables:
+
+| Table type | Data source | Setup |
+|-|-|-|
+| Azure table | Logs from Azure resources or required by Azure services and solutions. | Azure Monitor Logs creates Azure tables automatically based on Azure services you use and [diagnostic settings](../essentials/diagnostic-settings.md) you configure for specific resources. |
+| Custom table | Non-Azure resource and any other data source, such as file-based logs. | [Create a custom table](../logs/create-custom-table.md).|
+| Search results | Logs within the workspace. | Azure Monitor creates a search job results table when you run a [search job](../logs/search-jobs.md). |
+| Restored logs | Archived logs. | Azure Monitor creates a restored logs table when you [restore archived logs](../logs/restore.md). |
+
+## Table schema
+
+A table's schema is the set of columns that make up the table, into which Azure Monitor Logs collects log data from one or more data sources.
+
+### Azure table schema
+
+Each Azure table has a predefined schema into which Azure Monitor Logs collects logs defined by Azure resources, services, and solutions.
+
+You can [add columns to an Azure table](../logs/create-custom-table.md#add-or-delete-a-custom-column) to store transformed log data or enrich data in the Azure table with data from another source.
+### Custom table schema
+
+You can [define a custom table's schema](../logs/create-custom-table.md) based on how you want to store data you collect from a given data source.
+
+Reduce costs and analysis effort by using data collection rules to [filter out and transform data before ingestion](../essentials/data-collection-transformations.md) based on the schema you define for your custom table.
+
+### Search results and restored logs table schema
+
+The schema of a search results table is based on the query you define when you [run the search job](../logs/search-jobs.md).
+
+A restored logs table has the same schema as the table from which you [restore logs](../logs/restore.md).
+
+You can't edit the schema of existing search results and restored logs tables.
+## Log data plan
+
+[Configure a table's log data plan](../logs/basic-logs-configure.md) based on how often you access the data in the table. The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance. The **Analytics** plan makes log data available for interactive queries and use by features and services.
+
+## Retention and archive
+
+ Archiving is a low-cost solution for keeping data that you no longer use regularly in your workspace for compliance or occasional investigation. [Set table-level retention policies](../logs/data-retention-archive.md) to override the default workspace retention policy and to archive data within your workspace.
+
+To access archived data, [run a search job](../logs/search-jobs.md) or [restore data for a specific time range](../logs/restore.md).
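For example, a hedged PowerShell sketch that keeps 30 days of interactive retention and 365 days of total retention (the remainder served from the archive) for a single table; the table name, values, and `{placeholder}` segments are assumptions.

```PowerShell
# Retention values and table name are illustrative; totalRetentionInDays includes the archive period.
$retentionParams = @'
{
  "properties": {
    "retentionInDays": 30,
    "totalRetentionInDays": 365
  }
}
'@

Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PATCH -Payload $retentionParams
```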
+
+## Next steps
+
+Learn how to:
+
+- [Set a table's log data plan](../logs/basic-logs-configure.md)
+- [Add custom tables and columns](../logs/create-custom-table.md)
+- [Set retention and archive policies](../logs/data-retention-archive.md)
+- [Design a workspace architecture](../logs/workspace-design.md)
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
Set the query time range by either:
## Dismiss restored data
-To save costs, dismiss restored data when you no longer need it by deleting the restored table.
+To save costs, we recommend you [delete the restored table](../logs/create-custom-table.md#delete-a-table) to dismiss restored data when you no longer need it.
Deleting the restored table doesn't delete the data in the source table. > [!NOTE] > Restored data is available as long as the underlying source data is available. When you delete the source table from the workspace or when the source table's retention period ends, the data is dismissed from the restored table. However, the empty table will remain if you do not delete it explicitly.
-# [API](#tab/api-2)
-To delete a restore table, call the **Tables - Delete** API:
-
-```http
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{user defined name}_RST?api-version=2021-12-01-preview
-```
-# [CLI](#tab/cli-2)
-
-To delete a restore table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
-
-For example:
-
-```azurecli
-az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Heartbeat_RST
-```
-- ## Limitations Restore is subject to the following limitations.
You can:
- Restore data for a minimum of two days. - Restore up to 60 TB.-- Perform up to four restores per workspace per week. - Run up to two restore processes in a workspace concurrently. - Run only one active restore on a specific table at a given time. Executing a second restore on a table that already has an active restore will fail.
+- Perform up to four restores per table per week.
## Pricing model The charge for maintaining restored logs is calculated based on the volume of data you restore, in GB, and the number of days for which you restore the data. Charges are prorated and subject to the minimum restore duration and data volume. There is no charge for querying against restored logs.
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs also let you retrieve records from [Archived Logs](data-retention-ar
A search job sends its results to a new table in the same workspace as the source data. The results table is available as soon as the search job begins, but it may take time for results to begin to appear.
-The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans) table that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this value after the table is created.
+The search job results table is an [Analytics table](../logs/basic-logs-configure.md) that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this value after the table is created.
The search results table schema is based on the source table schema and the specified query. The following other columns help you track the source records:
az monitor log-analytics workspace table show --subscription ContosoSID --resour
-## Delete search a job table
-We recommend deleting the search job table when you're done querying the table. This reduces workspace clutter and extra charges for data retention.
-### [Portal](#tab/portal-3)
-1. From the Log Analytics workspace menu, select **Tables.**
-1. Search for the tables you want to delete by name, or by selecting **Search results** in the **Type** field.
-
- :::image type="content" source="media/search-job/search-results-on-log-analytics-tables-screen.png" alt-text="Screenshot that shows the Tables screen for a Log Analytics workspace with the Filter by name and Type fields highlighted." lightbox="media/search-job/search-results-on-log-analytics-tables-screen.png":::
-
-1. Select the tables you want to delete, select **Delete**, and confirm the deletion by typing **yes**.
-
- :::image type="content" source="media/search-job/delete-table.png" alt-text="Screenshot that shows the Delete Table screen for a table in a Log Analytics workspace." lightbox="media/search-job/delete-table.png":::
-
-### [API](#tab/api-3)
-
-To delete a table, call the **Tables - Delete** API:
-
-```http
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
-```
-
-### [CLI](#tab/cli-3)
-
-To delete a search table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
-
-For example:
-
-```azurecli
-az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH
-```
--
+## Delete a search job table
+We recommend you [delete the search job table](../logs/create-custom-table.md#delete-a-table) when you're done querying the table. This reduces workspace clutter and extra charges for data retention.
## Limitations Search jobs are subject to the following limitations:
azure-resource-manager Bicep Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md
var dogs = [
} ] var ages = map(dogs, dog => dog.age)
-output totalAge int = reduce(ages, 0, (cur, prev) => cur + prev)
-output totalAgeAdd1 int = reduce(ages, 1, (cur, prev) => cur + prev)
+output totalAge int = reduce(ages, 0, (cur, next) => cur + next)
+output totalAgeAdd1 int = reduce(ages, 1, (cur, next) => cur + next)
``` The output from the preceding example is:
backup Backup Azure Linux App Consistent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-linux-app-consistent.md
Pre-scripts invoke native application APIs, which quiesce the IOs, and flush in-
- **VMSnapshotScriptPluginConfig.json**: Permission "600." For example, only "root" user should have "read" and "write" permissions to this file, and no user should have "execute" permissions.
- **Pre-script file**: Permission "700." For example, only "root" user should have "read", "write", and "execute" permissions to this file. The file is expected to be a shell script but theoretically this script can internally spawn or refer to other scripts like a python script.
+ - **Pre-script file**: Permission "700." For example, only "root" user should have "read", "write", and "execute" permissions to this file. The file is expected to be a shell script but theoretically this script can internally spawn or refer to other scripts like a Python script.
- **Post-script** Permission "700." For example, only "root" user should have "read", "write", and "execute" permissions to this file. The file is expected to be a shell script but theoretically this script can internally spawn or refer to other scripts like a python script.
+ - **Post-script** Permission "700." For example, only "root" user should have "read", "write", and "execute" permissions to this file. The file is expected to be a shell script but theoretically this script can internally spawn or refer to other scripts like a Python script.
> [!IMPORTANT] > The framework gives users a lot of power. Secure the framework, and ensure only ΓÇ£rootΓÇ¥ user has access to critical JSON and script files.
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
For more information, see [Azure Monitor metrics](../azure-monitor/essentials/da
> [!NOTE] > If a request to the origin times out, the value for HttpStatusCode is set to **0**.
-***Bytes Hit Ration = (egress from edge - egress from origin)/egress from edge**
+**Bytes Hit Ratio = (egress from edge - egress from origin)/egress from edge**
Scenarios excluded in bytes hit ratio calculation:
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 11/4/2022 Last updated : 11/8/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## November 2022 Guest OS
+
+>[!NOTE]
+>The November Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the November Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-11 | [5019966] | Latest Cumulative Update(LCU) | 6.51 | Nov 8, 2022 |
+| Rel 22-11 | [5019958] | IE Cumulative Updates | 2.131, 3.118, 4.111 | Nov 8, 2022 |
+| Rel 22-11 | [5019081] | Latest Cumulative Update(LCU) | 7.19 | Nov 8, 2022 |
+| Rel 22-11 | [5019964] | Latest Cumulative Update(LCU) | 5.75 | Nov 8, 2022 |
+| Rel 22-11 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.131 | Nov 8, 2022 |
+| Rel 22-11 | [5020630] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.131 | Nov 8, 2022 |
+| Rel 22-11 | [5016268] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.111 | Nov 8, 2022 |
+| Rel 22-11 | [5020629] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.111 | Nov 8, 2022 |
+| Rel 22-11 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.118 | Nov 8, 2022 |
+| Rel 22-11 | [5020628] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.118 | Nov 8, 2022 |
+| Rel 22-11 | [5020627] | .NET Framework 3.5 and 4.7.2 Cumulative Update LKG | 6.51 | Nov 8, 2022 |
+| Rel 22-11 | [5020619] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.19 | Nov 8, 2022 |
+| Rel 22-11 | [5020000] | Monthly Rollup | 2.131 | Nov 8, 2022 |
+| Rel 22-11 | [5020009] | Monthly Rollup | 3.118 | Nov 8, 2022 |
+| Rel 22-11 | [5020023] | Monthly Rollup | 4.111 | Nov 8, 2022 |
+| Rel 22-11 | [5016263] | Servicing Stack update | 3.118 | Jul 12, 2022 |
+| Rel 22-11 | [5018922] | Servicing Stack update | 4.111 | Oct 11, 2022 |
+| Rel 22-11 | [4578013] | OOB Standalone Security Update | 4.111 | Aug 19, 2020 |
+| Rel 22-11 | [5017396] | Servicing Stack update | 5.75 | Sep 13, 2022 |
+| Rel 22-11 | [5017397] | Servicing Stack update | 2.131 | Sep 13, 2022 |
+| Rel 22-11 | [4494175] | Microcode | 5.75 | Sep 1, 2020 |
+| Rel 22-11 | [4494174] | Microcode | 6.51 | Sep 1, 2020 |
+
+[5019966]: https://support.microsoft.com/kb/5019966
+[5019958]: https://support.microsoft.com/kb/5019958
+[5019081]: https://support.microsoft.com/kb/5019081
+[5019964]: https://support.microsoft.com/kb/5019964
+[5013637]: https://support.microsoft.com/kb/5013637
+[5020630]: https://support.microsoft.com/kb/5020630
+[5016268]: https://support.microsoft.com/kb/5016268
+[5020629]: https://support.microsoft.com/kb/5020629
+[5013635]: https://support.microsoft.com/kb/5013635
+[5020628]: https://support.microsoft.com/kb/5020628
+[5020627]: https://support.microsoft.com/kb/5020627
+[5020619]: https://support.microsoft.com/kb/5020619
+[5020000]: https://support.microsoft.com/kb/5020000
+[5020009]: https://support.microsoft.com/kb/5020009
+[5020023]: https://support.microsoft.com/kb/5020023
+[5016263]: https://support.microsoft.com/kb/5016263
+[5018922]: https://support.microsoft.com/kb/5018922
+[4578013]: https://support.microsoft.com/kb/4578013
+[5017396]: https://support.microsoft.com/kb/5017396
+[5017397]: https://support.microsoft.com/kb/5017397
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
++ ## October 2022 Guest OS
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
Limited Access services are made available to customers under the terms governin
The following services are Limited Access: -- [Embedded Speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context): All features - [Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context): Pro features - [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context): All features - [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context): Identify and Verify features, face ID property
Features of these services that aren't listed above are available without regist
Submit a registration form for each Limited Access service you would like to use: -- [Embedded Speech](https://aka.ms/csgate-embedded-speech): All features - [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features - [Face API](https://aka.ms/facerecognition): Identify and Verify features
Existing customers have until June 30, 2023 to submit a registration form and be
The registration forms can be found here: -- [Embedded Speech](https://aka.ms/csgate-embedded-speech): All features - [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features - [Face API](https://aka.ms/facerecognition): Identify and Verify features
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
The Azure Communication Services SMS SDK uses the following error codes to help
| 4007 | The Destination/To number has opted out of receiving messages from you| Mark the Destination/To number as opted out so that no further message attempts are made to the number| | 4008 | You've exceeded the maximum number of messages allowed for your profile| Ensure you aren't exceeding the maximum number of messages allowed for your number or use queues to batch the messages | | 4009 | Message was rejected by the Microsoft Entitlement System| This most often happens when fraudulent activity is detected. Contact support for more details |
+| 4010 | Message was blocked due to the toll-free number not being verified | [Review unverified sending limits](./sms/sms-faq.md#toll-free-verification) and submit toll-free verification as soon as possible |
| 5000 | Message failed to deliver. Please reach out Microsoft support team for more details| File a support request through the Azure portal | | 5001 | Message failed to deliver due to temporary unavailability of application/system| | | 5002 | Message Delivery Timeout| Try resending the message |
-| 9999 | Message failed to deliver due to unknown error/failure| Try resending the message |
+| 9999 | Message failed to deliver due to unknown error/failure| Try resending the message |
## Related information
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
Most of the events sent by Event Grid are platform agnostic meaning they're emit
| CallEnded | A call is terminated and all participants are removed | | ParticipantAdded | A participant has been added to a call | | ParticipantRemoved| A participant has been removed from a call |
+| RecordingFileStatusUpdated| A recording file is available |
Read more about these events and payload schema [here](../../../event-grid/communication-services-voice-video-events.md)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Call Recording uses [Azure Event Grid](https://learn.microsoft.com/azure/event-gr
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` is published when a recording is ready for retrieval, typically a few minutes after the recording process has completed (for example, meeting ended, recording stopped). Recording event notifications include `contentLocation` and `metadataLocation`, which are used to retrieve both recorded media and a recording metadata file. ### Notification Schema Reference+ ```typescript { "id": string, // Unique guid for event
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
"eventTime": string // ISO 8601 date time for when the event was created } ```
-## Metadata Schema
+### Metadata Schema Reference
+ ```typescript { "resourceId": <string>, // stable resource id of the ACS resource recording
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
Many countries and states have laws and regulations that apply to call recording. PSTN, voice, and video calls, often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
-Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. An example of a recording metadata file is provided below for reference.
+Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call.
## Known Issues
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/incoming-call-notification.md
Given the above examples, the following scenarios will trigger an `IncomingCall`
| Source | Destination | Scenario(s) | | | -- | -- | | Azure Communication Services identity | Azure Communication Services identity | Call, Redirect, Add Participant, Transfer |
-| Azure Communication Services identity | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant
+| Azure Communication Services identity | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer
| Public PSTN | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer > [!NOTE]
This architecture has the following benefits:
- PSTN number assignment and routing logic can exist in your application versus being statically configured online. - As identified in the above [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
-To subscribe to the `IncomingCall` notification from Event Grid, [follow this how-to guide](../../how-tos/call-automation-sdk/subscribe-to-incoming-call.md).
+To see a sample payload for the event and to learn about other calling events published to Event Grid, check out this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
## Call routing in Call Automation or Event Grid
You can use [advanced filters](../../../event-grid/event-filtering.md) in your E
## Number assignment Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you'll maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, you'll invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.+
+## Next steps
+- [Build a Call Automation application](../../quickstarts/voice-video-calling/callflows-for-customer-interactions.md) to simulate a customer interaction.
+- [Redirect an inbound PSTN call](../../how-tos/call-automation-sdk/redirect-inbound-telephony-calls.md) to your resource.
communication-services Subscribe To Incoming Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation-sdk/subscribe-to-incoming-call.md
- Title: Subscribe to IncomingCall for Call Automation-
-description: Learn how to subscribe to the IncomingCall event from Event Grid for the Call Automation SDK
----- Previously updated : 09/26/2022---
-# Subscribe to IncomingCall for Call Automation
-
-> [!IMPORTANT]
-> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-
-As described in the [Incoming Call concepts guide](../../concepts/voice-video-calling/incoming-call-notification.md), your Event Grid subscription to the `IncomingCall` notification is critical to using the Call Automation SDK for scenarios involving answering, redirecting, or rejecting a call.
-
-## Choosing the right subscription
-
-Event Grid offers several choices for receiving events including Azure Functions, Azure Service Bus, or simple HTTP/S web hooks. Thinking about how the Call Automation platform functions, we rely on web hook callbacks for mid-call events such as `CallConnected`, `CallTransferAccepted`, or `PlayCompleted` as a few examples. The most optimal choice would be to use a **Webhook** subscription since you need a web API for the mid-call events anyway.
-
-> [!IMPORTANT]
-> When using a Webhook subscription, you must undergo a validation of your web service endpoint as per [the following Event Grid instructions.](../../../event-grid/webhook-event-delivery.md)
-
-## Prerequisites
-- An Azure account with an active subscription.
-- A deployed [Communication Service resource](../../quickstarts/create-communication-resource.md) and valid Connection String
-- The [ARMClient application](https://github.com/projectkudu/ARMClient), used to configure the Event Grid subscription.
-
-## Configure an Event Grid subscription
-
-> [!NOTE]
-> The following steps will not be necessary once the `IncomingCall` event is published to the Event Grid portal.
-
-1. Locate and copy the following to be used in the armclient command-line statement below:
- - Azure subscription ID
- - Resource group name
-
- On the picture below you can see the required fields:
-
- :::image type="content" source="./media/portal.png" alt-text="Screenshot of Communication Services resource page on Azure portal.":::
-
-2. Communication Service resource name
-3. Determine your local development HTTP port used by your web service application.
-4. Start your web service making sure you've followed the steps outlined in the above note regarding validation of your Webhook from Event Grid.
-5. Since the `IncomingCall` event isn't yet published in the Azure portal, you must run the following command-line statements to configure your subscription:
-
- ``` console
- armclient login
-
- armclient put "/subscriptions/<your_azure_subscription_guid>/resourceGroups/<your_resource_group_name>/providers/Microsoft.Communication/CommunicationServices/<your_acs_resource_name>/providers/Microsoft.EventGrid/eventSubscriptions/<subscription_name>?api-version=2022-06-15" "{'properties':{'destination':{'properties':{'endpointUrl':'<your_ngrok_uri>'},'endpointType':'WebHook'},'filter':{'includedEventTypes': ['Microsoft.Communication.IncomingCall']}}}" -verbose
-
- ```
-
-### How do you know it worked?
-
-1. Click on the **Events** section of your Azure Communication Services resource.
-2. Locate your subscription and check the **Provisioning state** making sure it says **Succeeded**.
-
- :::image type="content" source="./media/subscription-validation.png" alt-text="Event Grid Subscription Validation":::
-
->[!IMPORTANT]
-> If you use the Azure portal to modify your Event Grid subscription by adding or removing an event or by modifying any aspect of the subscription such as an advanced filter, the `IncomingCall` subscription will be removed. This is a known issue and will only exist during Private Preview. Use the above command-line statements to simply recreate your subscription if this happens.
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Build a Call Automation application](../../quickstarts/voice-video-calling/callflows-for-customer-interactions.md)
-> [Redirect an inbound PSTN call](../../how-tos/call-automation-sdk/redirect-inbound-telephony-calls.md)
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions.md
zone_pivot_groups: acs-csharp-java
In this quickstart, you'll learn how to build an application that uses the Azure Communication Services Call Automation SDK to handle the following scenario:
- handling the `IncomingCall` event from Event Grid
- answering a call
-- playing an audio file
+- playing an audio file and recognizing input (DTMF) from the caller
- adding a communication user to the call such as a customer service agent who uses a web application built using Calling SDKs to connect to Azure Communication Services

::: zone pivot="programming-language-csharp"
In this quickstart, you'll learn how to build an application that uses the Azure
[!INCLUDE [Call flows for customer interactions with Java](./includes/call-automation/Callflow-for-customer-interactions-java.md)]

::: zone-end
+## Subscribe to IncomingCall event
+
+`IncomingCall` is an Azure Event Grid event that notifies you of incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/voice-video-calling/incoming-call-notification.md).
+1. Navigate to your resource on Azure portal and select `Events` from the left side menu.
+1. Select `+ Event Subscription` to create a new subscription.
+1. Filter for Incoming Call event.
+1. Choose the **Web Hook** endpoint type and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall`.
+![Screenshot of portal page to create a new event subscription.](./media/call-automation/event-susbcription.png)
+
+1. Select **Create** to start the creation of the subscription and the validation of your endpoint, as mentioned previously. The subscription is ready when the provisioning status is marked as **Succeeded**.
+
+This subscription currently has no filters, so all incoming calls will be sent to your application. To filter for a specific phone number or a communication user, use the Filters tab, or create the subscription from the command line as in the sketch below.
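+
+As an alternative to the portal, a minimal sketch of creating the same subscription with the Azure CLI follows; the subscription name, the resource ID placeholder, and the `data.to.phoneNumber.value` advanced-filter key are illustrative assumptions rather than values from this quickstart:
+
+```azurecli
+az eventgrid event-subscription create \
+  --name incoming-call-subscription \
+  --source-resource-id <COMMUNICATION_SERVICES_RESOURCE_ID> \
+  --endpoint "<ngrok_url>/api/incomingCall" \
+  --included-event-types Microsoft.Communication.IncomingCall \
+  --advanced-filter data.to.phoneNumber.value StringEndsWith 5551212
+```
+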
+ ## Testing the application
-1. Place a call to the number you acquired in the Azure portal (see prerequisites above).
-2. Your Event Grid subscription to the `IncomingCall` should execute and call your web server.
-3. The call will be answered, and an asynchronous web hook callback will be sent to the NGROK callback URI.
-4. When the call is connected, a `CallConnected` event will be sent to your web server, wrapped in a `CloudEvent` schema and can be easily deserialized using the Call Automation SDK parser. At this point, the application will request audio to be played and input from a targeted phone number.
-5. When the input has been received and recognized, the web server will make a request to add a participant to the call.
+1. Place a call to the number you acquired in the Azure portal.
+2. Your Event Grid subscription to the `IncomingCall` event should execute and call your application, which will request to answer the call.
+3. When the call is connected, a `CallConnected` event will be sent to your application's callback URL. At this point, the application will request audio to be played and to receive input from the caller.
+4. From your phone, press any three number keys, or press one number key and then the # key.
+5. When the input has been received and recognized, the application will make a request to add a participant to the call.
+6. Once the added user answers, you can talk to them.
+ ## Clean up resources
container-apps Containerapp Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containerapp-up.md
+
+ Title: Deploy Azure Container Apps with the az containerapp up command
+description: How to deploy a container app with the az containerapp up command
+ Last updated : 11/08/2022
+# Deploy Azure Container Apps with the az containerapp up command
+
+The `az containerapp up` (or `up`) command is the fastest way to deploy an app in Azure Container Apps from an existing image, local source code, or a GitHub repo. With this single command, you can have your container app up and running in minutes.
+
+The `az containerapp up` command is a streamlined way to create and deploy container apps that primarily use default settings. However, you'll need to use the `az containerapp create` command for apps with customizations such as:
+
+- Dapr configuration
+- Secrets
+- Transport protocols
+- Custom domains
+- Storage mounts
+
+To customize your container app's resource or scaling settings, you can use the `up` command and then the `az containerapp update` command to change these settings. Note that the `az containerapp up` command isn't an abbreviation of the `az containerapp update` command.
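+
+For example, a minimal sketch of that two-step flow: after an initial `up` deployment, adjust the scale range with `az containerapp update` (the replica bounds here are illustrative values):
+
+```azurecli
+az containerapp update \
+  --name <CONTAINER_APP_NAME> \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --min-replicas 1 \
+  --max-replicas 5
+```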
+
+The `up` command can create or use existing resources including:
+
+- Resource group
+- Azure Container Registry
+- Container Apps environment and Log Analytics workspace
+- Your container app
+
+The command can build and push a container image to an Azure Container Registry (ACR) when you provide local source code or a GitHub repo. When you're working from a GitHub repo, it creates a GitHub Actions workflow that automatically builds and pushes a new container image when you commit changes to your GitHub repo.
+
+ If you need to customize the Container Apps environment, first create the environment using the `az containerapp env create` command. If you don't provide an existing environment, the `up` command looks for one in your resource group and, if found, uses that environment. If not found, it creates an environment with a Log Analytics workspace.
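+
+As a minimal sketch, you might create the environment first and then point the `up` command at it with the `--environment` option (placeholder names throughout):
+
+```azurecli
+az containerapp env create \
+  --name <ENVIRONMENT_NAME> \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --location <LOCATION>
+
+az containerapp up \
+  --name <CONTAINER_APP_NAME> \
+  --image <REGISTRY_SERVER>/<IMAGE_NAME>:<TAG> \
+  --environment <ENVIRONMENT_NAME>
+```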
+
+To learn more about the `az containerapp up` command and its options, see [`az containerapp up`](/cli/azure/containerapp#az_containerapp_up).
+
+## Prerequisites
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | If you use a GitHub repo, sign up for [free](https://github.com/join). |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
| Local source code | If you deploy from local source code, you need a local directory that contains it. |
+| Existing Image | If you use an existing image, you'll need your registry server, image name, and tag. If you're using a private registry, you'll need your credentials. |
+
+## Set up
+
+1. Log in to Azure with the Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+1. Next, install the Azure Container Apps extension for the CLI.
+
+ ```azurecli
+ az extension add --name containerapp --upgrade
+ ```
+
+1. Now that the current extension or module is installed, register the `Microsoft.App` namespace.
+
+ ```azurecli
+ az provider register --namespace Microsoft.App
+ ```
+
+1. Register the `Microsoft.OperationalInsights` provider for the Azure Monitor Log Analytics workspace.
+
+ ```azurecli
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
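+
+Registration can take a few minutes. As an optional check (not part of the original steps), you can verify the registration state before continuing:
+
+```azurecli
+az provider show --namespace Microsoft.App --query registrationState
+```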
+
+## Deploy from an existing image
+
+You can deploy a container app that uses an existing image in a public or private container registry. If you're deploying from a private registry, you'll need to provide your credentials using the `--registry-server`, `--registry-username`, and `--registry-password` options, as in the sketch that follows.
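+
+A minimal sketch of a private-registry deployment (all values are placeholders):
+
+```azurecli
+az containerapp up \
+  --name <CONTAINER_APP_NAME> \
+  --image <REGISTRY_SERVER>/<IMAGE_NAME>:<TAG> \
+  --registry-server <REGISTRY_SERVER> \
+  --registry-username <REGISTRY_USERNAME> \
+  --registry-password <REGISTRY_PASSWORD> \
+  --ingress external \
+  --target-port <PORT_NUMBER>
+```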
+
+In this example, the `az containerapp up` command performs the following actions:
+
+1. Creates a resource group.
+1. Creates an environment and Log Analytics workspace.
+1. Creates and deploys a container app that pulls the image from a public registry.
+1. Sets the container app's ingress to external with a target port set to the specified value.
+
+Run the following command to deploy a container app from an existing image. Replace the \<Placeholders\> with your values.
+
+```azurecli
+az containerapp up \
+ --name <CONTAINER_APP_NAME> \
+ --image <REGISTRY_SERVER>/<IMAGE_NAME>:<TAG> \
+ --ingress external \
+ --target-port <PORT_NUMBER>
+```
+
+You can use the `up` command to redeploy a container app. If you want to redeploy with a new image, use the `--image` option to specify a new image. Ensure that the `--resource-group` and `--environment` options are set to the same values as the original deployment.
+
+```azurecli
+az containerapp up \
+ --name <CONTAINER_APP_NAME> \
+ --image <REGISTRY_SERVER>/<IMAGE_NAME>:<TAG> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --environment <ENVIRONMENT_NAME> \
+ --ingress external \
+ --target-port <PORT_NUMBER>
+```
+
+## Deploy from local source code
+
+When you use the `up` command to deploy from a local source, it builds the container image, pushes it to a registry, and deploys the container app. It creates the registry in Azure Container Registry if you don't provide one.
+
+The command can build the image with or without a Dockerfile. If you build without a Dockerfile, the following languages are supported:
+
+- .NET
+- Node.js
+- PHP
+- Python
+- Ruby
+- Go
+
+The following example shows how to deploy a container app from local source code.
+
+In the example, the `az containerapp up` command performs the following actions:
+
+1. Creates a resource group.
+1. Creates an environment and Log Analytics workspace.
+1. Creates a registry in Azure Container Registry.
+1. Builds the container image (using the Dockerfile if it exists).
+1. Pushes the image to the registry.
+1. Creates and deploys the container app.
+
+Run the following command to deploy a container app from local source code:
+
+```azurecli
+az containerapp up \
+  --name <CONTAINER_APP_NAME> \
+  --source <SOURCE_DIRECTORY> \
+ --ingress external
+```
+
+When the Dockerfile includes the EXPOSE instruction, the `up` command configures the container app's ingress and target port using the information in the Dockerfile.
+
+If you've configured ingress through your Dockerfile or your app doesn't require ingress, you can omit the `ingress` option.
+
+The output of the command includes the URL for the container app.
+
+If there's a failure, you can run the command again with the `--debug` option to get more information about the failure. If the build fails without a Dockerfile, you can try adding a Dockerfile and running the command again.
+
+To use the `az containerapp up` command to redeploy your container app with an updated image, include the `--resource-group` and `--environment` arguments. The following example shows how to redeploy a container app from local source code.
+
+1. Make changes to the source code.
+1. Run the following command:
+
+ ```azurecli
+ az containerapp up \
+ --name <CONTAINER_APP_NAME> \
+ --source <SOURCE_DIRECTORY> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --environment <ENVIRONMENT_NAME>
+ ```
+
+## Deploy from a GitHub repository
+
+When you use the `az containerapp up` command to deploy from a GitHub repository, it generates a GitHub Actions workflow that builds the container image, pushes it to a registry, and deploys the container app. The command creates the registry in Azure Container Registry if you don't provide one.
+
+A Dockerfile is required to build the image. When the Dockerfile includes the EXPOSE instruction, the command configures the container app's ingress and target port using the information in the Dockerfile.
+
+The following example shows how to deploy a container app from a GitHub repository.
+
+In the example, the `az containerapp up` command performs the following actions:
+
+1. Creates a resource group.
+1. Creates an environment and Log Analytics workspace.
+1. Creates a registry in Azure Container Registry.
+1. Builds the container image using the Dockerfile.
+1. Pushes the image to the registry.
+1. Creates and deploys the container app.
+1. Creates a GitHub Actions workflow to build the container image and deploy the container app when future changes are pushed to the GitHub repository.
+
+To deploy an app from a GitHub repository, run the following command:
+
+```azurecli
+az containerapp up \
+ --name <CONTAINER_APP_NAME> \
+ --repo <GitHub repository URL> \
+ --ingress external
+```
+
+If you've configured ingress through your Dockerfile or your app doesn't require ingress, you can omit the `ingress` option.
+
+Because the `up` command creates a GitHub Actions workflow, rerunning it to deploy changes to your app's image will have the unwanted effect of creating multiple workflows. Instead, push changes to your GitHub repository, and the GitHub workflow will automatically build and deploy your app. To change the workflow, edit the workflow file in GitHub.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Deploy your code to Azure Container Apps](quickstart-code-to-cloud.md)
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/diagnostic-queries.md
Title: Troubleshoot issues with advanced diagnostics queries for API for Cassandra-
-description: Learn how to use Azure Log Analytics to improve the performance and health of your Azure Cosmos DB for Apache Cassandra account.
-
+ Title: Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for Apache Cassandra
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Apache Cassandra.
- Previously updated : 06/12/2021
+ Last updated : 11/08/2022
-# Troubleshoot issues with advanced diagnostics queries for the API for Cassandra
+# Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for Apache Cassandra
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin](../includes/appliesto-nosql-mongodb-cassandra-gremlin.md)]
-> [!div class="op_single_selector"]
-> * [API for NoSQL](../advanced-queries.md)
-> * [API for MongoDB](../mongodb/diagnostic-queries.md)
-> * [API for Cassandra](diagnostic-queries.md)
-> * [API for Gremlin](../queries-gremlin.md)
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB for Cassandra account by using diagnostics logs sent to **resource-specific** tables.
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB Cassansra API account by using diagnostics logs sent to **resource-specific** tables.
-
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md) to learn how to enable this feature.
-
-For [resource-specific tables](../monitor-resource-logs.md), data is written into individual tables for each category of the resource. We recommend this mode because it:
-- Makes it much easier to work with the data.
-- Provides better discoverability of the schemas.
-- Improves performance across both ingestion latency and query times.
-
## Prerequisites
For [resource-specific tables](../monitor-resource-logs.md), data is written int
> [!NOTE]
> Note that when you enable full-text diagnostics, the queries returned will contain PII data.
-> This feature will not only log the skeleton of the query with obfuscated parameters but log the values of the parameters themselves.
+> This feature will not only log the skeleton of the query with obfuscated parameters but log the values of the parameters themselves.
> This can help in diagnosing whether queries on a specific Primary Key (or set of Primary Keys) are consuming far more RUs than queries on other Primary Keys.

## Log Analytics queries with different scenarios
For [resource-specific tables](../monitor-resource-logs.md), data is written int
:::image type="content" source="./media/diagnostic-queries/log-analytics-questions-bubble.png" alt-text="Image of a bubble word map with possible questions on how to leverage Log Analytics within Azure Cosmos DB"::: ### RU consumption+ - Cassandra operations that are consuming high RU/s.
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| project TimeGenerated, RequestCharge, OperationName,
-requestType=split(split(PIICommandText,'"')[3], ' ')[0]
-| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType), OperationName;
-```
+
+ ```kusto
+ CDBCassandraRequests
+ | where DatabaseName=="azure_comos" and CollectionName=="user"
+ | project TimeGenerated, RequestCharge, OperationName,
+ requestType=split(split(PIICommandText,'"')[3], ' ')[0]
+ | summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType), OperationName;
+ ```
- Monitoring RU consumption per operation on logical partition keys.
-```kusto
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
-| order by TotalRequestCharge;
-
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by OperationName, PartitionKey
-| order by TotalRequestCharge;
-
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_comos" and CollectionName=="user"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey
-| render timechart;
-```
+
+ ```kusto
+ CDBPartitionKeyRUConsumption
+ | where DatabaseName=="azure_comos" and CollectionName=="user"
+ | summarize TotalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | order by TotalRequestCharge;
+
+ CDBPartitionKeyRUConsumption
+ | where DatabaseName=="azure_comos" and CollectionName=="user"
+ | summarize TotalRequestCharge=sum(todouble(RequestCharge)) by OperationName, PartitionKey
+ | order by TotalRequestCharge;
+
+ CDBPartitionKeyRUConsumption
+ | where DatabaseName=="azure_comos" and CollectionName=="user"
+ | summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey
+ | render timechart;
+ ```
- What are the top queries impacting RU consumption?
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where TimeGenerated > ago(24h)
-| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0], RequestCharge, TimeGenerated
-| order by RequestCharge desc;
-```
+
+ ```kusto
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | where TimeGenerated > ago(24h)
+ | project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0], RequestCharge, TimeGenerated
+ | order by RequestCharge desc;
+ ```
+ - RU consumption based on variations in payload sizes for read and write operations.
-```kusto
-// This query is looking at read operations
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName =="SELECT"
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-
-// This query is looking at write operations
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-
-// Write operations over a time period.
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-| render timechart;
-
-// Read operations over a time period.
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
-| where cassandraOperationName =="SELECT"
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
-| render timechart;
-```
+
+ ```kusto
+ // This query is looking at read operations
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+ | where cassandraOperationName =="SELECT"
+ | summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+
+ // This query is looking at write operations
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+ | where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+ | summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+
+ // Write operations over a time period.
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+ | where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+ | summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+ | render timechart;
+
+ // Read operations over a time period.
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+ | where cassandraOperationName =="SELECT"
+ | summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+ | render timechart;
+ ```
- RU consumption based on read and write operations by logical partition.
-```kusto
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where OperationName in ("Delete", "Read", "Upsert")
-| summarize totalRU=max(RequestCharge) by OperationName, PartitionKeyRangeId
-```
+
+ ```kusto
+ CDBPartitionKeyRUConsumption
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | where OperationName in ("Delete", "Read", "Upsert")
+ | summarize totalRU=max(RequestCharge) by OperationName, PartitionKeyRangeId
+ ```
- RU consumption by physical and logical partition.
-```kusto
-CDBPartitionKeyRUConsumption
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| summarize totalRequestCharge=sum(RequestCharge) by PartitionKey, PartitionKeyRangeId;
-```
+
+ ```kusto
+ CDBPartitionKeyRUConsumption
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | summarize totalRequestCharge=sum(RequestCharge) by PartitionKey, PartitionKeyRangeId;
+ ```
- Is a hot partition leading to high RU consumption?
-```kusto
-CDBPartitionKeyStatistics
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where TimeGenerated > now(-8h)
-| summarize StorageUsed = sum(SizeKb) by PartitionKey
-| order by StorageUsed desc
-```
+
+ ```kusto
+ CDBPartitionKeyStatistics
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | where TimeGenerated > now(-8h)
+ | summarize StorageUsed = sum(SizeKb) by PartitionKey
+ | order by StorageUsed desc
+ ```
- How does the partition key affect RU consumption?
-```kusto
-let storageUtilizationPerPartitionKey =
-CDBPartitionKeyStatistics
-| project AccountName=tolower(AccountName), PartitionKey, SizeKb;
-CDBCassandraRequests
-| project AccountName=tolower(AccountName),RequestCharge, ErrorCode, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| join kind=inner storageUtilizationPerPartitionKey on $left.AccountName==$right.AccountName
-| where ErrorCode != -1 //successful
-| project AccountName, PartitionKey,ErrorCode,RequestCharge,SizeKb, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName;
-```
+
+ ```kusto
+ let storageUtilizationPerPartitionKey =
+ CDBPartitionKeyStatistics
+ | project AccountName=tolower(AccountName), PartitionKey, SizeKb;
+ CDBCassandraRequests
+ | project AccountName=tolower(AccountName),RequestCharge, ErrorCode, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | join kind=inner storageUtilizationPerPartitionKey on $left.AccountName==$right.AccountName
+ | where ErrorCode != -1 //successful
+ | project AccountName, PartitionKey,ErrorCode,RequestCharge,SizeKb, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName;
+ ```
### Latency
+
- Number of server-side timeouts (Status Code - 408) seen in the time window.
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where ErrorCode in (4608, 4352) //Corresponding code in Cassandra
-| summarize max(DurationMs) by bin(TimeGenerated, 10m), ErrorCode
-| render timechart;
-```
+
+ ```kusto
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | where ErrorCode in (4608, 4352) //Corresponding code in Cassandra
+ | summarize max(DurationMs) by bin(TimeGenerated, 10m), ErrorCode
+ | render timechart;
+ ```
- Do we observe spikes in server-side latencies in the specified time window?
-```kusto
-CDBCassandraRequests
-| where TimeGenerated > now(-6h)
-| DatabaseName=="azure_cosmos" and CollectionName=="user"
-| summarize max(DurationMs) by bin(TimeGenerated, 10m)
-| render timechart;
-```
+
+ ```kusto
+ CDBCassandraRequests
+ | where TimeGenerated > now(-6h)
+ | DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | summarize max(DurationMs) by bin(TimeGenerated, 10m)
+ | render timechart;
+ ```
- Operations that are getting throttled.
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| project RequestLength, ResponseLength,
-RequestCharge, DurationMs, TimeGenerated, OperationName,
-query=split(split(PIICommandText,'"')[3], ' ')[0]
-| summarize max(DurationMs) by bin(TimeGenerated, 10m), RequestCharge, tostring(query),
-RequestLength, OperationName
-| order by RequestLength, RequestCharge;
-```
+
+ ```kusto
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | project RequestLength, ResponseLength,
+ RequestCharge, DurationMs, TimeGenerated, OperationName,
+ query=split(split(PIICommandText,'"')[3], ' ')[0]
+ | summarize max(DurationMs) by bin(TimeGenerated, 10m), RequestCharge, tostring(query),
+ RequestLength, OperationName
+ | order by RequestLength, RequestCharge;
+ ```
### Throttling
+
- Is your application experiencing any throttling?
-```kusto
-CDBCassandraRequests
-| where RetriedDueToRateLimiting != false and RateLimitingDelayMs > 0;
-```
+
+ ```kusto
+ CDBCassandraRequests
+ | where RetriedDueToRateLimiting != false and RateLimitingDelayMs > 0;
+ ```
+ - What queries are causing your application to throttle within a specified time period, looking specifically at 429 errors?
-```kusto
-CDBCassandraRequests
-| where DatabaseName=="azure_cosmos" and CollectionName=="user"
-| where ErrorCode==4097 // Corresponding error code in Cassandra
-| project DatabaseName , CollectionName , CassandraCommands=split(split(PIICommandText,'"')[3], ' ')[0] , OperationName, TimeGenerated;
-```
+ ```kusto
+ CDBCassandraRequests
+ | where DatabaseName=="azure_cosmos" and CollectionName=="user"
+ | where ErrorCode==4097 // Corresponding error code in Cassandra
+ | project DatabaseName , CollectionName , CassandraCommands=split(split(PIICommandText,'"')[3], ' ')[0] , OperationName, TimeGenerated;
+ ```
## Next steps
+
- Enable [log analytics](../../azure-monitor/logs/log-analytics-overview.md) on your API for Cassandra account.
- Review the [error code definitions](error-codes-solution.md).
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/diagnostic-queries.md
Title: Troubleshoot issues with advanced diagnostics queries for API for Gremlin-
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the API for Gremlin.
--
+ Title: Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for Apache Gremlin
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Apache Gremlin.
- Previously updated : 06/12/2021
+ Last updated : 11/08/2022
-# Troubleshoot issues with advanced diagnostics queries for the API for Gremlin
+# Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for Apache Gremlin
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin](../includes/appliesto-nosql-mongodb-cassandra-gremlin.md)]
-> [!div class="op_single_selector"]
-> * [API for NoSQL](../advanced-queries.md)
-> * [API for MongoDB](../mongodb/diagnostic-queries.md)
-> * [API for Cassandra](../cassandr)
-> * [API for Gremlin](diagnostic-queries.md)
->
In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md) to learn how to enable this feature.
-
-For [resource-specific tables](../monitor-resource-logs.md), data is written into individual tables for each category of the resource. We recommend this mode because it:
-- Makes it much easier to work with the data.
-- Provides better discoverability of the schemas.
-- Improves performance across both ingestion latency and query times.

## Common queries
+
Common queries are shown in the resource-specific and Azure Diagnostics tables.

### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
```Kusto
let topRequestsByRUcharge = CDBDataPlaneRequests
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| order by RequestCharge desc
| take 10
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```Kusto
let topRequestsByRUcharge = AzureDiagnostics
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
| order by requestCharge_s desc
| take 10
- ```
+ ```
+
-### Requests throttled (statusCode = 429) in a specific time window
+### Requests throttled (statusCode = 429) in a specific time window
+
+#### [Resource-specific](#tab/resource-specific)
-# [Resource-specific](#tab/resource-specific)
```Kusto
let throttledRequests = CDBDataPlaneRequests
| where StatusCode == "429"
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| join kind=inner throttledRequests on ActivityId
| project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests"
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| project piiCommandText_s, activityId_g, databasename_s , collectionname_s
| join kind=inner throttledRequests on activityId_g
| project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
- ```
+ ```
+ ### Queries with large response lengths (payload size of the server response)
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| order by max_ResponseLength desc
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| join kind=inner operationsbyUserAgent on activityId_g
| summarize max(responseLength_s1) by piiCommandText_s
| order by max_responseLength_s1 desc
- ```
+ ```
+ ### RU consumption by physical partition (across all replicas in the replica set)
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
| render columnchart
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
//| where operationType_s == 'Create'
| summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
| render columnchart
- ```
+ ```
+ ### RU consumption by logical partition (across all replicas in the replica set)
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
| render columnchart
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
| render columnchart
```
+
-## Next steps
-* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../monitor-resource-logs.md).
-* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+## Next steps
+
+- For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../monitor-resource-logs.md).
+- For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/diagnostic-queries.md
Title: Troubleshoot issues with advanced diagnostics queries for API for MongoDB-
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the API for MongoDB.
-
+ Title: Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for MongoDB
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for MongoDB.
- Previously updated : 06/12/2021
+ Last updated : 11/08/2022
-# Troubleshoot issues with advanced diagnostics queries for the API for MongoDB
+# Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for MongoDB
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin](../includes/appliesto-nosql-mongodb-cassandra-gremlin.md)]
-> [!div class="op_single_selector"]
-> * [API for NoSQL](../advanced-queries.md)
-> * [API for MongoDB](diagnostic-queries.md)
-> * [API for Cassandra](../cassandr)
-> * [API for Gremlin](../queries-gremlin.md)
->
In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md) to learn how to enable this feature.
-
-For [resource-specific tables](../monitor-resource-logs.md), data is written into individual tables for each category of the resource. We recommend this mode because it:
-- Makes it much easier to work with the data.
-- Provides better discoverability of the schemas.
-- Improves performance across both ingestion latency and query times.

## Common queries
+
Common queries are shown in the resource-specific and Azure Diagnostics tables.

### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ //Enable full-text query to view entire query text
+ CDBMongoRequests
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| take 10
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto AzureDiagnostics | where Category == "MongoRequests"
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| project piiCommandText_s, activityId_g, databaseName_s , collectionName_s, requestCharge_s
| order by requestCharge_s desc
| take 10
- ```
+ ```
+
-### Requests throttled (statusCode = 429 or 16500) in a specific time window
+### Requests throttled (statusCode = 429 or 16500) in a specific time window
+
+#### [Resource-specific](#tab/resource-specific)
-# [Resource-specific](#tab/resource-specific)
```Kusto
CDBMongoRequests
| where TimeGenerated > ago(24h)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| project DatabaseName, CollectionName, PIICommandText, OperationName, TimeGenerated
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto AzureDiagnostics | where Category == "MongoRequests" and TimeGenerated > ago(24h) | where ErrorCode == "429" or ErrorCode == "16500" | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
- ```
+ ```
+
-### Timed-out requests (statusCode = 50) in a specific time window
+### Timed-out requests (statusCode = 50) in a specific time window
+
+#### [Resource-specific](#tab/resource-specific)
-# [Resource-specific](#tab/resource-specific)
```Kusto CDBMongoRequests | where TimeGenerated > ago(24h) | where ErrorCode == "50" | project DatabaseName, CollectionName, PIICommandText, OperationName, TimeGenerated ```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto AzureDiagnostics | where Category == "MongoRequests" and TimeGenerated > ago(24h) | where ErrorCode == "50" | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
- ```
+ ```
+ ### Queries with large response lengths (payload size of the server response)
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBMongoRequests
+ //specify collection and database
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| summarize max(ResponseLength) by PIICommandText, RequestCharge, DurationMs, OperationName, TimeGenerated
| order by max_ResponseLength desc
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto AzureDiagnostics | where Category == "MongoRequests"
Common queries are shown in the resource-specific and Azure Diagnostics tables.
//| where databaseName_s == "DB NAME" and collectionName_s == "COLLECTIONNAME" | summarize max(responseLength_s) by piiCommandText_s, OperationName, duration_s, requestCharge_s | order by max_responseLength_s desc
- ```
+ ```
+ ### RU consumption by physical partition (across all replicas in the replica set)
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| render columnchart
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
//| where operationType_s == 'Create'
| summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
| render columnchart
- ```
+ ```
+ ### RU consumption by logical partition (across all replicas in the replica set)
-# [Resource-specific](#tab/resource-specific)
+#### [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
| render columnchart
```
-# [Azure Diagnostics](#tab/azure-diagnostics)
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
Common queries are shown in the resource-specific and Azure Diagnostics tables.
| summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
| render columnchart
```
+
-## Next steps
-* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../monitor-resource-logs.md).
-* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+## Next steps
+
+- For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../monitor-resource-logs.md).
+- For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Title: Monitor Azure Cosmos DB data by using Azure Diagnostic settings
description: Learn how to use Azure diagnostic settings to monitor the performance and availability of data stored in Azure Cosmos DB
- Previously updated : 05/20/2021
+ Last updated : 11/08/2022

# Monitor Azure Cosmos DB data by using diagnostic settings in Azure
+
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Diagnostic settings in Azure are used to collect resource logs. Azure resource Logs are emitted by a resource and provide rich, frequent data about the operation of that resource. These logs are captured per request and they are also referred to as "data plane logs". Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
+Diagnostic settings in Azure are used to collect resource logs. Azure resource logs are emitted by a resource and provide rich, frequent data about the operation of that resource. These logs are captured per request, and they're also referred to as "data plane logs". Some examples of data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
Platform metrics and the Activity logs are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic settings for Azure Cosmos DB accounts and send resource logs to the following sources:
+
- Log Analytics workspaces
- - Data sent to Log Analytics can be written into **Azure Diagnostics (legacy)** or **Resource-specific (preview)** tables
+ - Data sent to Log Analytics can be written into **Azure Diagnostics (legacy)** or **Resource-specific (preview)** tables
- Event hub
- Storage Account

> [!NOTE]
-> We recommend creating the diagnostic setting in resource-specific mode (for all APIs except API for Table) [following our instructions for creating diagnostics setting via REST API](monitor-resource-logs.md#create-diagnostic-setting). This option provides additional cost-optimizations with an improved view for handling data.
+> We recommend creating the diagnostic setting in resource-specific mode (for all APIs except API for Table) [following our instructions for creating diagnostics setting via REST API](monitor-resource-logs.md). This option provides additional cost-optimizations with an improved view for handling data.
+
+## Create diagnostic settings
-## <a id="create-setting-portal"></a> Create diagnostics settings via the Azure portal
+### [Azure portal](#tab/azure-portal)
1. Sign into the [Azure portal](https://portal.azure.com).
-2. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane under the **Monitoring section**, and then select **Add diagnostic setting** option.
+1. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane under the **Monitoring section**, and then select **Add diagnostic setting** option.
+
+ :::image type="content" source="media/monitor/diagnostics-settings-selection.png" lightbox="media/monitor/diagnostics-settings-selection.png" alt-text="Sreenshot of the diagnostics selection page.":::
+
+1. In the **Diagnostic settings** pane, fill the form with your preferred categories. The following table lists the available log categories.
+
+ | Category | API | Definition | Key Properties |
+ | | | | |
+ | **DataPlaneRequests** | All APIs | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
+ | **MongoRequests** | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
+ | **CassandraRequests** | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ | **GremlinRequests** | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
+ | **QueryRuntimeStatistics** | SQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
+ | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the top three keys with largest storage size are captured by the PartitionKeyStatistics log. If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
+ | **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
+ | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include creating an account, adding or removing a region, updating account replication settings, and so on. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
+ | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
- :::image type="content" source="./media/monitor/diagnostics-settings-selection.png" alt-text="Select diagnostics":::
+1. Once you select your **Categories details**, send your logs to your preferred destination. If you're sending logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table.
+ :::image type="content" source="media/monitor/diagnostics-resource-specific.png" alt-text="Screenshot of the option to enable resource-specific diagnostics.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the Azure CLI. See the documentation for this command for descriptions of its parameters.
+
+> [!NOTE]
+> If you are using API for NoSQL, we recommend setting the **export-to-resource-specific** property to **true**.
-3. In the **Diagnostic settings** pane, fill the form with your preferred categories.
+1. Create shell variables for `subscriptionId`, `diagnosticSettingName`, `workspaceName` and `resourceGroupName`.
-### Choose log categories
+ ```azurecli
+ # Variable for subscription id
+ subscriptionId="<subscription-id>"
- |Category |API | Definition | Key Properties |
- |||||
- |DataPlaneRequests | All APIs | Logs back-end requests as data plane operations which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
- |MongoRequests | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- |CassandraRequests | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
- |GremlinRequests | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
- |QueryRuntimeStatistics | SQL | This table details query operations executed against a API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
- |PartitionKeyStatistics | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: <br/><ul><li> At least 1% of the documents in the physical partition have same logical partition key. </li><li> Out of all the keys in the physical partition, the top 3 keys with largest storage size are captured by the PartitionKeyStatistics log. </li></ul> If the previous conditions are not met, the partition key statistics data is not available. It's okay if the above conditions are not met for your account, which typically indicates you have no logical partition storage skew. <br/><br/>Note: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes are not uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
- |PartitionKeyRUConsumption | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
- |ControlPlaneRequests | All APIs | Logs details on control plane operations i.e. creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
- |TableApiRequests | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ # Variable for resource group name
+ resourceGroupName="<resource-group-name>"
+
+ # Variable for workspace name
+ workspaceName="<workspace-name>"
-4. Once you select your **Categories details**, then send your Logs to your preferred destination. If you're sending Logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table.
+    # Variable for diagnostic setting name
+    diagnosticSettingName="<diagnostic-setting-name>"
+
+    # Variable for Azure Cosmos DB account name
+    accountName="<account-name>"
+    ```
- :::image type="content" source="./media/monitor/diagnostics-resource-specific.png" alt-text="Select enable resource-specific":::
+1. Use `az monitor diagnostic-settings create` to create the setting.
+
+ ```azurecli
+ az monitor diagnostic-settings create \
+        --resource "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDb/databaseAccounts/$accountName" \
+ --name $diagnosticSettingName \
+ --export-to-resource-specific true \
+ --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' \
+ --workspace "/subscriptions/$subscriptionId/resourcegroups/$resourceGroupName/providers/microsoft.operationalinsights/workspaces/$workspaceName"
+ ```
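+
+1. Optionally, verify the new setting with [`az monitor diagnostic-settings show`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-show). This verification step is a minimal sketch, not part of the original walkthrough; it assumes the same shell variables as the previous steps.
+
+    ```azurecli
+    # Confirm the diagnostic setting was created and inspect its log categories
+    az monitor diagnostic-settings show \
+        --resource "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDb/databaseAccounts/$accountName" \
+        --name $diagnosticSettingName
+    ```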
+
+### [REST API](#tab/rest-api)
-## <a id="create-diagnostic-setting"></a> Create diagnostic setting via REST API
Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorupdate) for creating a diagnostic setting via the interactive console.
-> [!Note]
+
+> [!NOTE]
> We recommend setting the **logAnalyticsDestinationType** property to **Dedicated** for enabling resource specific tables.
-### Request
-
- ```HTTP
- PUT
- https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnosticSettings/service?api-version={api-version}
- ```
-
-### Headers
-
- |Parameters/Headers | Value/Description |
- |||
- |name | The name of your Diagnostic setting. |
- |resourceUri | subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME} |
- |api-version | 2017-05-01-preview |
- |Content-Type | application/json |
-
-### Body
-
-```json
-{
- "id": "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}",
- "type": "Microsoft.Insights/diagnosticSettings",
- "name": "name",
- "location": null,
- "kind": null,
- "tags": null,
- "properties": {
- "storageAccountId": null,
- "serviceBusRuleId": null,
- "workspaceId": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}",
- "eventHubAuthorizationRuleId": null,
- "eventHubName": null,
- "logs": [
- {
- "category": "DataPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "QueryRuntimeStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "PartitionKeyStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
+1. Create an HTTP `PUT` request.
+
+ ```HTTP
+ PUT
+ https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnosticSettings/service?api-version={api-version}
+ ```
+
+1. Use these headers with the request.
+
+ | Parameters/Headers | Value/Description |
+ | | |
+ | **name** | The name of your diagnostic setting. |
+ | **resourceUri** | The Microsoft Insights subresource URI for the Azure Cosmos DB account. |
+ | **api-version** | `2017-05-01-preview` |
+ | **Content-Type** | `application/json` |
+
+ > [!NOTE]
+ > The URI for the Microsoft Insights subresource is in this format: `subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}`. For more information about Azure Cosmos DB resource URIs, see [resource URI syntax for Azure Cosmos DB REST API](/rest/api/cosmos-db/cosmosdb-resource-uri-syntax-for-rest).
++
+1. Set the body of the request to this JSON payload.
+
+ ```json
+ {
+ "id": "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}",
+ "type": "Microsoft.Insights/diagnosticSettings",
+ "name": "name",
+ "location": null,
+ "kind": null,
+ "tags": null,
+ "properties": {
+ "storageAccountId": null,
+ "serviceBusRuleId": null,
+ "workspaceId": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}",
+ "eventHubAuthorizationRuleId": null,
+ "eventHubName": null,
+ "logs": [
+ {
+ "category": "DataPlaneRequests",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "QueryRuntimeStatistics",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "PartitionKeyStatistics",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "PartitionKeyRUConsumption",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "ControlPlaneRequests",
+ "categoryGroup": null,
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
}
- },
- {
- "category": "PartitionKeyRUConsumption",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "ControlPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- }
- ],
- "logAnalyticsDestinationType": "Dedicated"
- },
- "identity": null
-}
-```
-
-## Create diagnostic setting via Azure CLI
-Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the Azure CLI. See the documentation for this command for descriptions of its parameters.
+ ],
+ "logAnalyticsDestinationType": "Dedicated"
+ },
+ "identity": null
+ }
+ ```
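+
+1. Submit the request. As one option (a sketch, not part of the original article), you can send the payload with [`az rest`](/cli/azure/reference-index#az-rest), assuming you save the JSON body above to a hypothetical file named `diagnostic-setting.json` and fill in the placeholders first.
+
+    ```azurecli
+    # PUT the diagnostic setting payload to the Azure Monitor REST API
+    az rest \
+        --method PUT \
+        --uri "https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}?api-version=2017-05-01-preview" \
+        --body @diagnostic-setting.json
+    ```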
-> [!Note]
-> If you are using API for NoSQL, we recommend setting the **export-to-resource-specific** property to **true**.
+
- ```azurecli-interactive
- az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/ --name {DIAGNOSTIC_SETTING_NAME} --export-to-resource-specific true --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' --workspace /subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}"
- ```
-## <a id="full-text-query"></a> Enable full-text query for logging query text
+## Enable full-text query for logging query text
-> [!Note]
+> [!NOTE]
> Enabling this feature may result in additional logging costs. For pricing details, visit [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). We recommend that you disable this feature after troubleshooting.
-Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you'll be able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You'll also give permission for Azure Cosmos DB to access and surface this data in your logs.
+Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you'll be able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You'll also give permission for Azure Cosmos DB to access and surface this data in your logs.
-1. To enable this feature, navigate to the `Features` blade in your Azure Cosmos DB account.
-
- :::image type="content" source="./media/monitor/full-text-query-features.png" alt-text="Navigate to Features blade":::
+### [Azure portal](#tab/azure-portal)
-2. Select `Enable`, this setting will then be applied in the within the next few minutes. All newly ingested logs will have the full-text or PIICommand text for each request.
-
- :::image type="content" source="./media/monitor/select-enable-full-text.png" alt-text="Select enable full-text":::
+1. To enable this feature, navigate to the `Features` page in your Azure Cosmos DB account.
-To learn how to query using this newly enabled feature visit [advanced queries](advanced-queries.md).
+ :::image type="content" source="media/monitor/full-text-query-features.png" lightbox="media/monitor/full-text-query-features.png" alt-text="Screenshot of navigation to the Features page.":::
-## Next steps
+2. Select `Enable`. This setting is applied within the next few minutes. All newly ingested logs will include the full-text or PIICommand text for each request.
+
+ :::image type="content" source="media/monitor/select-enable-full-text.png" alt-text="Screenshot of full-text being enabled.":::
+
+### [Azure CLI / REST API](#tab/azure-cli+rest-api)
+
+1. Ensure you're logged in to the Azure CLI. For more information, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli). Optionally, ensure that you've configured the active subscription for your CLI. For more information, see [change the active Azure CLI subscription](/cli/azure/manage-azure-subscriptions-azure-cli#change-the-active-subscription).
+
+1. Create shell variables for `accountName` and `resourceGroupName`.
+
+ ```azurecli
+ # Variable for resource group name
+ resourceGroupName="<resource-group-name>"
+
+ # Variable for account name
+ accountName="<account-name>"
+ ```
+
+1. Get the unique identifier for your existing account using [`az cosmosdb show`](/cli/azure/cosmosdb#az-cosmosdb-show).
+
+ ```azurecli
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query id
+ ```
+
+ Store the unique identifier in a shell variable named `$uri`.
+
+ ```azurecli
+ uri=$(
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query id \
+ --output tsv
+ )
+ ```
+
+1. Check if full-text query is already enabled by querying the resource using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `GET` verb.
-* For a reference of the log and metric data, see [monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).
+ ```azurecli
+ az rest \
+ --method GET \
+ --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
+ --query "{AccountName:name, FullTextQueryEnabled:properties.diagnosticLogSettings.enableFullTextQuery}"
+ ```
-* For more information on how to query resource-specific tables see [troubleshooting using resource-specific tables](monitor-logs-basic-queries.md#resource-specific-queries).
+1. If full-text query isn't already enabled, enable it using `az rest` again with an HTTP `PATCH` verb and a JSON payload.
-* For more information on how to query AzureDiagnostics tables see [troubleshooting using AzureDiagnostics tables](monitor-logs-basic-queries.md#azure-diagnostics-queries).
+ ```azurecli
+ az rest \
+ --method PATCH \
+ --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
+ --body '{"properties": {"diagnosticLogSettings": {"enableFullTextQuery": "True"}}}'
+ ```
+
+1. Wait a few minutes for the operation to complete. Check the status of full-text query by using `az rest` again.
+
+ ```azurecli
+ az rest \
+ --method GET \
+ --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
+ --query "{AccountName:name, FullTextQueryEnabled:properties.diagnosticLogSettings.enableFullTextQuery}"
+ ```
+
+ The output should be similar to this example.
+
+ ```json
+ {
+ "AccountName": "<account-name>",
+ "FullTextQueryEnabled": "True"
+ }
+ ```
+++
+## Query data
+
+To learn how to query using these newly enabled features, see:
+
+- [API for NoSQL](nosql/diagnostic-queries.md)
+- [API for MongoDB](mongodb/diagnostic-queries.md)
+- [API for Apache Cassandra](cassandra/diagnostic-queries.md)
+- [API for Apache Gremlin](gremlin/diagnostic-queries.md)
+
+## Next steps
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
+- For a reference of the log and metric data, see [monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).
+- For more information on how to query resource-specific tables, see [troubleshooting using resource-specific tables](monitor-logs-basic-queries.md#resource-specific-queries).
+- For more information on how to query AzureDiagnostics tables, see [troubleshooting using AzureDiagnostics tables](monitor-logs-basic-queries.md#azure-diagnostics-queries).
+- For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/diagnostic-queries.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for NoSQL
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for NoSQL.
++++++ Last updated : 11/08/2022++
+# Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for NoSQL
+++
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
++
+## Common queries
+
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
+
+### Top N (10) queries ordered by Request Unit (RU) consumption in a specific time frame
+
+#### [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBQueryRuntimeStatistics
+ | project QueryText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , QueryText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(24h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ | project querytext_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , querytext_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
+
++
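+
+These queries are written for the Log Analytics query experience in the Azure portal, but you can also run them from the command line. The following is a minimal sketch (not from the original article) using [`az monitor log-analytics query`](/cli/azure/monitor/log-analytics); the workspace GUID is a placeholder for your Log Analytics workspace ID.
+
+```azurecli
+# Run a Kusto query against a Log Analytics workspace from the CLI
+az monitor log-analytics query \
+    --workspace "<workspace-guid>" \
+    --analytics-query "CDBDataPlaneRequests | where TimeGenerated > ago(24h) | take 10"
+```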
+### Requests throttled (statusCode = 429) in a specific time window
+
+#### [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBQueryRuntimeStatistics
+ | project QueryText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , QueryText , OperationName, TimeGenerated
+ ```
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and statusCode_s == "429"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ | project querytext_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , querytext_s , OperationName, TimeGenerated
+ ```
+++
+### Queries with the largest response lengths (payload size of the server response)
+
+#### [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBQueryRuntimeStatistics
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by QueryText
+ | order by max_ResponseLength desc
+ ```
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s1) by querytext_s
+ | order by max_responseLength_s1 desc
+ ```
+++
+### RU consumption by physical partition (across all replicas in the replica set)
+
+#### [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
+++
+### RU consumption by logical partition (across all replicas in the replica set)
+
+#### [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+
+#### [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
+++
+## Next steps
+
+- For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../monitor-resource-logs.md).
+- For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cost-management-billing View Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/view-utilization.md
Previously updated : 10/12/2022 Last updated : 11/08/2022
You can view savings plan utilization percentage in the Azure portal.
+> [!NOTE]
+> It can take up to 48 hours for initial savings plan purchase utilization data to appear in utilization reports and in cost analysis. Afterward, you can expect usage data to appear within 2 to 24 hours.
+ ## View utilization in the Azure portal with Azure RBAC access To view savings plan utilization, you must have Azure RBAC access to the savings plan or you must have elevated access to manage all Azure subscriptions and management groups.
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
Data is collected using:
## Why use Defender for Cloud to deploy monitoring components?
-The security of your workloads depends on the data that the monitoring components collect. The components ensure security coverage for all supported resources.
+Visibility into the security of your workloads depends on the data that the monitoring components collect. The components ensure security coverage for all supported resources.
To save you the process of manually installing the extensions, Defender for Cloud reduces management overhead by installing all required extensions on existing and new machines. Defender for Cloud assigns the appropriate **Deploy if not exists** policy to the workloads in the subscription. This policy type ensures the extension is provisioned on all existing and future resources of that type.
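
If you want to see which of these policy assignments exist in a subscription, one option (a sketch, not from the article) is to list them with the Azure CLI; the display-name filter here is an assumption, since exact assignment names vary by plan.

```azurecli
# List policy assignments whose display name mentions Defender
az policy assignment list \
    --query "[?contains(displayName, 'Defender')].{name:name, displayName:displayName}" \
    --output table
```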
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## November 2022
+
+Updates in November include:
+
+- [Protect containers in your entire GKE organization with Defender for Containers](#protect-containers-in-your-entire-gke-organization-with-defender-for-containers)
+
+### Protect containers in your entire GKE organization with Defender for Containers
+
+Defender for Containers helps you secure your Azure and multicloud container environments with environment hardening, vulnerability assessment, and run-time threat protection for nodes and clusters. GCP users enable this protection by connecting the GCP projects to Defender for Cloud using the native GCP connector.
+
+Now you can enable Defender for Containers for your GCP organization to protect clusters across your entire GCP organization. Create a new GCP connector or update your existing GCP connectors that connect organizations to Defender for Cloud, and enable Defender for Containers.
+
+Learn more about [connecting GCP projects and organizations](quickstart-onboard-gcp.md#connect-your-gcp-project) to Defender for Cloud.
+ ## October 2022 Updates in October include:
We have renamed the Auto-provisioning page to **Settings & monitoring**.
Auto-provisioning was meant to allow at-scale enablement of prerequisites, which are needed by Defender for Cloud's advanced features and capabilities. To better support our expanded capabilities, we are launching a new experience with the following changes: **The Defender for Cloud's plans page now includes**:-- When you enable Defender plans, a Defender plan that requires monitoring components automatically turns on the required components with default settings. These settings can be edited by the user at any time.
+- When you enable a Defender plan that requires monitoring components, those components are enabled for automatic provisioning with default settings. These settings can optionally be edited at any time.
- You can access the monitoring component settings for each Defender plan from the Defender plan page. - The Defender plans page clearly indicates whether all the monitoring components are in place for each Defender plan, or if your monitoring coverage is incomplete. **The Settings & monitoring page**:-- Each monitoring component indicates the Defender plans that it is related to.
+- Each monitoring component indicates the Defender plans to which it's related.
Learn more about [managing your monitoring settings](monitoring-components.md).
defender-for-iot Sample Connectivity Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/sample-connectivity-models.md
Title: Sample OT network connectivity models - Microsoft Defender for IoT description: This article describes sample connectivity methods for Microsoft Defender for IoT OT sensor connections. Previously updated : 06/02/2022 Last updated : 11/08/2022
This article provides sample network models for Microsoft Defender for IoT senso
The following diagram shows an example of a ring network topology, in which each switch or node connects to exactly two other switches, forming a single continuous pathway for the traffic. ## Sample: Linear bus and star topology In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing. ## Sample: Multi-layer, multi-tenant network
defender-for-iot Configure Mirror Rspan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-rspan.md
Title: Configure traffic mirroring with a Remote SPAN (RSPAN) port - Microsoft Defender for IoT description: This article describes how to configure a remote SPAN (RSPAN) port for traffic mirroring when monitoring OT networks with Microsoft Defender for IoT. Previously updated : 09/20/2022 Last updated : 11/08/2022
Data in the VLAN is then delivered through trunked ports, across multiple switch
The following diagram shows an example of a remote VLAN architecture: This article describes a sample procedure for configuring RSPAN on a Cisco 2960 switch with 24 ports running IOS. The steps described are intended as high-level guidance. For more information, see the Cisco documentation.
defender-for-iot Configure Mirror Tap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-tap.md
Title: Configure traffic mirroring with active or passive aggregation with terminal access points - Microsoft Defender for IoT description: This article describes traffic mirroring with active passive aggregation with terminal access points (TAP) for OT monitoring with Microsoft Defender for IoT. Previously updated : 09/20/2022 Last updated : 11/08/2022
A TAP is a hardware device that allows network traffic to flow back and forth be
For example: Some TAPs aggregate both *Receive* and *Transmit*, depending on the switch configuration. If your switch doesn't support aggregation, each TAP uses two ports on your OT network sensor to monitor both *Receive* and *Transmit* traffic.
event-grid Availability Zones Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zones-disaster-recovery.md
Azure availability zones are physically separate locations within each Azure reg
Event Grid resource definitions for topics, system topics, domains, and event subscriptions and event data are automatically replicated across three availability zones ([when available](../availability-zones/az-overview.md#azure-regions-with-availability-zones)) in the region. When there's a failure in one of the availability zones, Event Grid resources **automatically failover** to another availability zone without any human intervention. Currently, it isn't possible for you to control (enable or disable) this feature. When an existing region starts supporting availability zones, existing Event Grid resources would be automatically failed over to take advantage of this feature. No customer action is required. ## Geo-disaster recovery across regions
event-grid Onboard Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md
For step #5, you should decide what kind of user experience you want to provide.
This article shows you how to **onboard as an Azure Event Grid partner** using the **Azure portal**. ## Communicate your interest in becoming a partner
-Contact the Event Grid team at [GridPartner@microsoft.com](mailto:GridPartner@microsoft.com) communicating your interest in becoming a partner. We'll have a conversation with you providing detailed information on Partner Events' use cases, personas, onboarding process, functionality, pricing, and more.
+Contact the Event Grid team at [askgrid@microsoft.com](mailto:askgrid@microsoft.com?subject=Interested&nbsp;to&nbsp;onboard&nbsp;as&nbsp;an&nbsp;Event&nbsp;Grid&nbsp;partner) to communicate your interest in becoming a partner. We'll have a conversation with you and provide detailed information on Partner Events' use cases, personas, onboarding process, functionality, pricing, and more.
## Prerequisites To complete the remaining steps, make sure you have:
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
You can also create Event Grid resources to receive events from Azure Event Grid
For either publishing events or receiving events, you create the same kind of Event Grid [resources](#resources-managed-by-partners) following these general steps.
-1. Communicate your interest in becoming a partner by sending an email to [GridPartner@microsoft.com](mailto:GridPartner@microsoft.com). Once you contact us, we'll guide you through the onboarding process and help your service get an entry card on our [Azure Event Grid gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic) so that your service can be found on the Azure portal.
+1. Contact the Event Grid team at [askgrid@microsoft.com](mailto:askgrid@microsoft.com?subject=Interested&nbsp;to&nbsp;onboard&nbsp;as&nbsp;an&nbsp;Event&nbsp;Grid&nbsp;partner) to communicate your interest in becoming a partner. Once you contact us, we'll guide you through the onboarding process and help your service get an entry card on our [Azure Event Grid gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic) so that your service can be found on the Azure portal.
2. Create a [partner registration](#partner-registration). This is a global resource and you usually need to create once. 3. Create a [partner namespace](#partner-namespace). This resource exposes an endpoint to which you can publish events to Azure. When creating the partner namespace, provide the partner registration you created. 4. Customer authorizes you to create a [partner topic](concepts.md#partner-topics) in customer's Azure subscription.
event-hubs Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-application.md
Title: Authenticate an application to access Azure Event Hubs resources description: This article provides information about authenticating an application with Azure Active Directory to access Azure Event Hubs resources Previously updated : 06/14/2021 Last updated : 11/08/2022
When a role is assigned to an Azure AD security principal, Azure grants access t
Azure provides the following Azure built-in roles for authorizing access to Event Hubs data using Azure AD and OAuth: - [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner): Use this role to give complete access to Event Hubs resources.-- [Azure Event Hubs Data Sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender): Use this role to give send access to Event Hubs resources.
+- [Azure Event Hubs Data Sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender): Use this role to give send access to Event Hubs resources.
- [Azure Event Hubs Data Receiver](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-receiver): Use this role to give receiving access to Event Hubs resources. For Schema Registry built-in roles, see [Schema Registry roles](schema-registry-overview.md#azure-role-based-access-control).
The following sections show you how to configure your native application or web
For an overview of the OAuth 2.0 code grant flow, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md). ### Register your application with an Azure AD tenant
-The first step in using Azure AD to authorize Event Hubs resources is registering your client application with an Azure AD tenant from the [Azure portal](https://portal.azure.com/). When you register your client application, you supply information about the application to AD. Azure AD then provides a client ID (also called an application ID) that you can use to associate your application with Azure AD runtime. To learn more about the client ID, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
+The first step in using Azure AD to authorize Event Hubs resources is registering your client application with an Azure AD tenant from the [Azure portal](https://portal.azure.com/). Follow steps in the [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md) to register an application in Azure AD that represents your application trying to access Event Hubs resources.
-The following images show steps for registering a web application:
+When you register your client application, you supply information about the application to AD. Azure AD then provides a client ID (also called an application ID) that you can use to associate your application with Azure AD runtime. To learn more about the client ID, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
-![Register an application](./media/authenticate-application/app-registrations-register.png)
> [!Note] > If you register your application as a native application, you can specify any valid URI for the Redirect URI. For native applications, this value does not have to be a real URL. For web applications, the redirect URI must be a valid URI, because it specifies the URL to which tokens are provided. After you've registered your application, you'll see the **Application (client) ID** under **Settings**:
-![Application ID of the registered application](./media/authenticate-application/application-id.png)
-
-For more information about registering an application with Azure AD, see [Integrating applications with Azure Active Directory](../active-directory/develop/quickstart-register-app.md).
### Create a client secret
-The application needs a client secret to prove its identity when requesting a token. To add the client secret, follow these steps.
-
-1. Navigate to your app registration in the Azure portal.
-1. Select the **Certificates & secrets** setting.
-1. Under **Client secrets**, select **New client secret** to create a new secret.
-1. Provide a description for the secret, and choose the wanted expiration interval.
-1. Immediately copy the value of the new secret to a secure location. The fill value is displayed to you only once.
-
- ![Client secret](./media/authenticate-application/client-secret.png)
+The application needs a client secret to prove its identity when requesting a token. Follow steps from [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret) to create a client secret for your app in Azure AD.
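+
+Alternatively, a client secret can be generated from the Azure CLI (a sketch; the client ID is a placeholder, and the returned `password` value should be stored securely because it's shown only once):
+
+```azurecli
+# Create (or reset) a client secret for the app registration
+az ad app credential reset --id "<application-client-id>"
+```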
## Assign Azure roles using the Azure portal
Once you define the role and its scope, you can test this behavior with samples
### Client libraries for token acquisition Once you've registered your application and granted it permissions to send/receive data in Azure Event Hubs, you can add code to your application to authenticate a security principal and acquire OAuth 2.0 token. To authenticate and acquire the token, you can use either one of the [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md) or another open-source library that supports OpenID or Connect 1.0. Your application can then use the access token to authorize a request against Azure Event Hubs.
-For a list of scenarios for which acquiring tokens is supported, see the [Scenarios](https://aka.ms/msal-net-scenarios) section of the [Microsoft Authentication Library (MSAL) for .NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) GitHub repository.
+For scenarios where acquiring tokens is supported, see the [Scenarios](https://aka.ms/msal-net-scenarios) section of the [Microsoft Authentication Library (MSAL) for .NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) GitHub repository.
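+
+For quick manual testing (a sketch, not a substitute for MSAL in application code), you can also fetch a token for the Event Hubs audience with the Azure CLI:
+
+```azurecli
+# Request an OAuth 2.0 access token for the Event Hubs resource
+az account get-access-token --resource "https://eventhubs.azure.net/"
+```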
## Samples
+- [Azure.Messaging.EventHubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp)
+
+ This sample has been updated to use the latest **Azure.Messaging.EventHubs** library.
- [Microsoft.Azure.EventHubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac). These samples use the old **Microsoft.Azure.EventHubs** library, but you can easily update them to use the latest **Azure.Messaging.EventHubs** library. To move the samples from the old library to the new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md).-[Azure.Messaging.EventHubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp)
- This sample has been updated to use the latest **Azure.Messaging.EventHubs** library.
## Next steps - To learn more about Azure RBAC, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)?
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
If your organization has many Azure subscriptions, you may need a way to efficiently manage access, policies, and compliance for those subscriptions. _Management groups_ provide a governance scope
-above subscriptions. You organize subscriptions into management groups the governance conditions you apply
+above subscriptions. You organize subscriptions into management groups; the governance conditions you apply
cascade by inheritance to all associated subscriptions. Management groups give you
To learn more about management groups, see:
- [Create management groups to organize Azure resources](./create-management-group-portal.md) - [How to change, delete, or manage your management groups](./manage.md)-- See options for [How to protect your resource hierarchy](./how-to/protect-resource-hierarchy.md)
+- See options for [How to protect your resource hierarchy](./how-to/protect-resource-hierarchy.md)
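+
+As a quick illustration of the grouping model (a sketch with hypothetical names), you can create a management group and move a subscription under it with the Azure CLI; conditions applied at the group then cascade to the subscription by inheritance:
+
+```azurecli
+# Create a management group and place a subscription under it
+az account management-group create --name "contoso-group"
+az account management-group subscription add \
+    --name "contoso-group" \
+    --subscription "<subscription-id>"
+```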
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
# Converting your data to FHIR for Azure API for FHIR
-The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**, **FHIR STU3 to FHIR R4(new!)**.
+The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed.
+
+Currently, the `$convert-data` custom endpoint supports four types of data conversion:
+
+|Origin Data Format | Destination Data Format|
+| -- | -- |
+|C-CDA | FHIR |
+|HL7v2 | FHIR|
+|JSON | FHIR|
+|FHIR STU3 | FHIR R4 **(new!)**|
+ > [!NOTE] > `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend you to use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
logic-apps Logic Apps Using File Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-file-connector.md
These steps apply only to Standard logic apps in an App Service Environment v3 w
1. Click inside the **File Content** parameter box. From the dynamic content list that appears, in the **When a file is created** section, select **File Content**.
- ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-built-in-standard.png)
+ ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-managed-standard.png)
1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
machine-learning Concept Data Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-analysis.md
Title: Understand your datasets
-description: Perform exploratory data analysis to understand feature biases and imbalances by using the Responsible AI dashboard's data explorer.
+description: Perform exploratory data analysis to understand feature biases and imbalances by using the Responsible AI dashboard's data analysis.
Previously updated : 08/17/2022 Last updated : 11/09/2022 # Understand your datasets (preview)
-Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, it can cause a model to incorrectly predict data points that belong to an underrepresented group or to be optimized along an inappropriate metric.
+Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, it can cause a model to incorrectly predict data points that belong to an underrepresented group or to be optimized along an inappropriate metric.
For example, while a model was training an AI system for predicting house prices, the training set was representing 75 percent of newer houses that had less than median prices. As a result, it was much less accurate in successfully identifying more expensive historic houses. The fix was to add older and expensive houses to the training data and augment the features to include insights about historical value. That data augmentation improved results.
-The data explorer component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. It helps you identify issues of overrepresentation and underrepresentation and to see how data is clustered in the dataset. Data visualizations consist of aggregate plots or individual data points.
+The data analysis component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. It helps you identify issues of overrepresentation and underrepresentation and to see how data is clustered in the dataset. Data visualizations consist of aggregate plots or individual data points.
-## When to use the data explorer
+## When to use data analysis
-Use the data explorer when you need to:
+Use data analysis when you need to:
- Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts). - Understand the distribution of your dataset across different cohorts and feature groups.
Use the data explorer when you need to:
## Next steps -- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).-- Explore the [supported data explorer visualizations](how-to-responsible-ai-dashboard.md#data-explorer) of the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-insights-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-insights-ui.md).
+- Explore the [supported data analysis visualizations](how-to-responsible-ai-dashboard.md#data-analysis) of the Responsible AI dashboard.
- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Compute targets are attached to your [Azure Machine Learning workspace](concept-
## Deploy
-To perform real-time inferencing, you must deploy a pipeline as a [online endpoint](concept-endpoints.md#what-are-online-endpoints). The online endpoint creates an interface between an external application and your scoring model. A call to an online endpoint returns prediction results to the application in real time. To make a call to an online endpoint, you pass the API key that was created when you deployed the endpoint. The endpoint is based on REST, a popular architecture choice for web programming projects.
+To perform real-time inferencing, you must deploy a pipeline as an [online endpoint](concept-endpoints.md#what-are-online-endpoints). The online endpoint creates an interface between an external application and your scoring model. A call to an online endpoint returns prediction results to the application in real time. To make a call to an online endpoint, you pass the API key that was created when you deployed the endpoint. The endpoint is based on REST, a popular architecture choice for web programming projects.
Online endpoints must be deployed to an Azure Kubernetes Service cluster.
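
As a small sketch of testing a deployed endpoint from the command line (assumes the Azure CLI `ml` extension and hypothetical endpoint and file names; this isn't specific to designer pipelines):

```azurecli
# Send a scoring request to an online endpoint
az ml online-endpoint invoke \
    --name "my-endpoint" \
    --request-file "sample-request.json" \
    --resource-group "<resource-group>" \
    --workspace-name "<workspace-name>"
```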
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-fairness-ml.md
The Fairlearn open-source package provides two types of unfairness mitigation al
## Next steps -- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).-- Explore the [supported model overview and fairness assessment visualizations](how-to-responsible-ai-dashboard.md#model-overview) of the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-insights-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-insights-ui.md).
+- Explore the [supported model overview and fairness assessment visualizations](how-to-responsible-ai-dashboard.md#model-overview-and-fairness-metrics) of the Responsible AI dashboard.
- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard. - Learn how to use the components by checking out Fairlearn's [GitHub repository](https://github.com/fairlearn/fairlearn/), [user guide](https://fairlearn.github.io/main/user_guide/https://docsupdatetracker.net/index.html), [examples](https://fairlearn.github.io/main/auto_examples/https://docsupdatetracker.net/index.html), and [sample notebooks](https://github.com/fairlearn/fairlearn/tree/master/notebooks).
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
Previously updated : 08/17/2022 Last updated : 11/09/2022
The Responsible AI dashboard provides a single interface to help you implement R
- [Machine learning interpretability](https://interpret.ml/) - [Error analysis](https://erroranalysis.ai/) - [Counterfactual analysis and perturbations](https://github.com/interpretml/DiCE)-- [Causal inference](https://github.com/microsoft/EconML)
+- [Causal inference](https://github.com/microsoft/EconML)
The dashboard offers a holistic assessment and debugging of models so you can make informed data-driven decisions. Having access to all of these tools in one interface empowers you to: - Evaluate and debug your machine learning models by identifying model errors and fairness issues, diagnosing why those errors are happening, and informing your mitigation steps. - Boost your data-driven decision-making abilities by addressing questions such as:
-
+ "What is the minimum change that users can apply to their features to get a different outcome from the model?"
-
+ "What is the causal effect of reducing or increasing a feature (for example, red meat consumption) on a real-world outcome (for example, diabetes progression)?" You can customize the dashboard to include only the subset of tools that are relevant to your use case.
The Responsible AI dashboard is accompanied by a [PDF scorecard](how-to-responsi
The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools. The dashboard integrates these tools with [Azure Machine Learning CLI v2, Azure Machine Learning Python SDK v2](concept-v2.md), and [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). The tools include: -- [Data explorer](concept-data-analysis.md), to understand and explore your dataset distributions and statistics.
+- [Data analysis](concept-data-analysis.md), to understand and explore your dataset distributions and statistics.
- [Model overview and fairness assessment](concept-fairness-ml.md), to evaluate the performance of your model and evaluate your model's group fairness issues (how your model's predictions affect diverse groups of people). - [Error analysis](concept-error-analysis.md), to view and understand how errors are distributed in your dataset. - [Model interpretability](how-to-machine-learning-interpretability.md) (importance values for aggregate and individual features), to understand your model's predictions and how those overall and individual predictions are made.
The following table describes when to use Responsible AI dashboard components to
| Identify | Error analysis | The error analysis component helps you get a deeper understanding of model failure distribution and quickly identify erroneous cohorts (subgroups) of data. <br><br> The capabilities of this component in the dashboard come from the [Error Analysis](https://erroranalysis.ai/) package.| | Identify | Fairness analysis | The fairness component defines groups in terms of sensitive attributes such as sex, race, and age. It then assesses how your model predictions affect these groups and how you can mitigate disparities. It evaluates the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics across the groups. <br><br>The capabilities of this component in the dashboard come from the [Fairlearn](https://fairlearn.org/) package. | | Identify | Model overview | The model overview component aggregates model assessment metrics in a high-level view of model prediction distribution for better investigation of its performance. This component also enables group fairness assessment by highlighting the breakdown of model performance across sensitive groups. |
-| Diagnose | Data explorer | The data explorer visualizes datasets based on predicted and actual outcomes, error groups, and specific features. You can then identify issues of overrepresentation and underrepresentation, along with seeing how data is clustered in the dataset. |
+| Diagnose | Data analysis | Data analysis visualizes datasets based on predicted and actual outcomes, error groups, and specific features. You can then identify issues of overrepresentation and underrepresentation, along with seeing how data is clustered in the dataset. |
| Diagnose | Model interpretability | The interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: <br> - Global explanations (for example, which features affect the overall behavior of a loan allocation model) <br> - Local explanations (for example, why an applicant's loan application was approved or rejected) <br><br> The capabilities of this component in the dashboard come from the [InterpretML](https://interpret.ml/) package. | | Diagnose | Counterfactual analysis and what-if| This component consists of two functionalities for better error diagnosis: <br> - Generating a set of examples in which minimal changes to a particular point alter the model's prediction. That is, the examples show the closest data points with opposite model predictions. <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard come from the [DiCE](https://github.com/interpretml/DiCE) package. |
Exploratory data analysis, causal inference, and counterfactual analysis capabil
These components of the Responsible AI dashboard support responsible decision-making:
-- **Data explorer**: You can reuse the data explorer component here to understand data distributions and to identify overrepresentation and underrepresentation. Data exploration is a critical part of decision making, because it isn't feasible to make informed decisions about a cohort that's underrepresented in the data.
+- **Data analysis**: You can reuse the data analysis component here to understand data distributions and to identify overrepresentation and underrepresentation. Data exploration is a critical part of decision making, because it isn't feasible to make informed decisions about a cohort that's underrepresented in the data.
- **Causal inference**: The causal inference component estimates how a real-world outcome changes in the presence of an intervention. It also helps construct promising interventions by simulating feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow you to apply new policies and effect real-world change. The capabilities of this component come from the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
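As a rough illustration of the idea behind EconML's estimation (not the dashboard's actual pipeline), here's a minimal sketch on synthetic data that recovers the average effect of a binary treatment:

```python
# Minimal EconML sketch with synthetic data: estimate the effect of a binary
# treatment T on an outcome Y, controlling for features X. Illustrative only.
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
T = rng.binomial(1, 0.5, size=1000)
Y = 2.0 * T + X[:, 0] + rng.normal(size=1000)

est = LinearDML(discrete_treatment=True, random_state=0)
est.fit(Y, T, X=X)
print(est.effect(X).mean())  # recovered effect should be close to 2.0
```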
Need some inspiration? Here are some examples of how the dashboard's components
| Responsible AI dashboard flow | Use case |
|-|-|
-| Model overview > error analysis > data explorer | To identify model errors and diagnose them by understanding the underlying data distribution |
-| Model overview > fairness assessment > data explorer | To identify model fairness issues and diagnose them by understanding the underlying data distribution |
+| Model overview > error analysis > data analysis | To identify model errors and diagnose them by understanding the underlying data distribution |
+| Model overview > fairness assessment > data analysis | To identify model fairness issues and diagnose them by understanding the underlying data distribution |
| Model overview > error analysis > counterfactuals analysis and what-if | To diagnose errors in individual instances with counterfactual analysis (minimum change to lead to a different model prediction) |
-| Model overview > data explorer | To understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort |
+| Model overview > data analysis | To understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort |
| Model overview > interpretability | To diagnose model errors through understanding how the model has made its predictions |
-| Data explorer > causal inference | To distinguish between correlations and causations in the data or decide the best treatments to apply to get a positive outcome |
+| Data analysis > causal inference | To distinguish between correlations and causations in the data or decide the best treatments to apply to get a positive outcome |
| Interpretability > causal inference | To learn whether the factors that the model has used for prediction-making have any causal effect on the real-world outcome|
-| Data explorer > counterfactuals analysis and what-if | To address customers' questions about what they can do next time to get a different outcome from an AI system|
+| Data analysis > counterfactuals analysis and what-if | To address customers' questions about what they can do next time to get a different outcome from an AI system|
## People who should use the Responsible AI dashboard
-The following people can use the Responsible AI dashboard, and its corresponding [Responsible AI scorecard](how-to-responsible-ai-scorecard.md), to build trust with AI systems:
+The following people can use the Responsible AI dashboard, and its corresponding [Responsible AI scorecard](concept-responsible-ai-scorecard.md), to build trust with AI systems:
- Machine learning professionals and data scientists who are interested in debugging and improving their machine learning models before deployment
- Machine learning professionals and data scientists who are interested in sharing their model health records with product managers and business stakeholders to build trust and receive deployment permissions
The following people can use the Responsible AI dashboard, and its corresponding
## Next steps
-- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed on the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-insights-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-insights-ui.md).
+- Learn how to generate a [Responsible AI scorecard](concept-responsible-ai-scorecard.md) based on the insights observed on the Responsible AI dashboard.
machine-learning Concept Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-scorecard.md
+
+ Title: Share Responsible AI insights and make data-driven decisions with Azure Machine Learning Responsible AI scorecard
+
+description: Learn how to use the Responsible AI scorecard to share responsible AI insights from your machine learning models and make data-driven decisions with non-technical and technical stakeholders.
+ Last updated: 11/09/2022
+# Share Responsible AI insights using the Responsible AI scorecard (preview)
+
+Our Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions. While it can help you implement Responsible AI practically in your machine learning lifecycle, it leaves some needs unaddressed:
+
+- A gap often exists between the technical Responsible AI tools (designed for machine-learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
+- Although an end-to-end machine learning life cycle includes both technical and non-technical stakeholders, there's little support for effective multi-stakeholder alignment that helps technical experts get timely feedback and direction from non-technical stakeholders.
+- AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
+
+One of the biggest benefits of the Azure Machine Learning ecosystem is the ability to archive model and data insights in the Azure Machine Learning run history for quick future reference. As part of that infrastructure, and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard to empower ML professionals to generate and share their data and model health records easily.
+
+## Who should use a Responsible AI scorecard?
+
+- If you're a data scientist or a machine learning professional, after training your model and generating its corresponding Responsible AI dashboard(s) for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard and share the report easily with your technical and non-technical stakeholders to build trust and gain their approval for deployment.
+
+- If you're a product manager, business leader, or an accountable stakeholder on an AI product, you can pass your desired model performance and fairness target values, such as target accuracy or target error rate, to your data science team and ask them to generate this scorecard against those targets. The scorecard can then help you decide whether the model should be deployed or further improved.
+
+## Next steps
+
+- Learn how to generate the Responsible AI dashboard and scorecard via [CLI and SDK](how-to-responsible-ai-insights-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-insights-ui.md).
+- Learn more about the Responsible AI dashboard and scorecard in this [Tech Community blog post](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/responsible-ai-dashboard-and-scorecard-in-azure-machine-learning/ba-p/3391068).
machine-learning Concept Responsible Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai.md
+
+ Title: What is Responsible AI (preview)
+
+description: Learn what Responsible AI is and how to use it with Azure Machine Learning to understand models, protect data, and control the model lifecycle.
+ Last updated: 11/09/2022
+#Customer intent: As a data scientist, I want to learn what Responsible AI is and how I can use it in Azure Machine Learning.
++
+# What is Responsible AI (preview)?
+++
+Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, Responsible AI can help proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
+
+Microsoft has developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
+
+This article demonstrates how Azure Machine Learning supports tools for enabling developers and data scientists to implement and operationalize the six principles.
++
+## Fairness and inclusiveness
+
+AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone who has similar symptoms, financial circumstances, or professional qualifications.
+
+**Fairness and inclusiveness in Azure Machine Learning**: The [fairness assessment](./concept-fairness-ml.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and developers to assess model fairness across sensitive groups defined in terms of gender, ethnicity, age, and other characteristics.
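+As a small illustration of the kind of disaggregated evaluation this component performs, the Fairlearn package (which, per the dashboard documentation, underpins the fairness component) can compute a metric per sensitive group. The data below is purely illustrative:
+
+```python
+# Illustrative only: compare accuracy across groups of a sensitive feature.
+from fairlearn.metrics import MetricFrame
+from sklearn.metrics import accuracy_score
+
+y_true = [0, 1, 1, 0, 1, 0]
+y_pred = [0, 1, 0, 0, 1, 1]
+sex = ["F", "F", "F", "M", "M", "M"]
+
+mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred, sensitive_features=sex)
+print(mf.overall)   # overall accuracy
+print(mf.by_group)  # accuracy broken down by group
+```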
+
+## Reliability and safety
+
+To build trust, it's critical that AI systems operate reliably, safely, and consistently. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. How they behave and the variety of conditions they can handle reflect the range of situations and circumstances that developers anticipated during design and testing.
+
+**Reliability and safety in Azure Machine Learning**: The [error analysis](./concept-error-analysis.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and developers to:
+
+- Get a deep understanding of how failure is distributed for a model.
+- Identify cohorts (subsets) of data with a higher error rate than the overall benchmark.
+
+These discrepancies might occur when the system or model underperforms for specific demographic groups or for infrequently observed input conditions in the training data.
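+To make the idea concrete, here's a toy sketch (synthetic data, not the error analysis component itself) of comparing cohort error rates against the overall benchmark:
+
+```python
+# Toy illustration: find cohorts whose error rate exceeds the overall rate.
+import pandas as pd
+
+df = pd.DataFrame({
+    "age_group": ["<30", "<30", "30+", "30+", "30+", "<30"],
+    "correct":   [1, 0, 1, 1, 1, 0],   # 1 = prediction was correct
+})
+overall_error = 1 - df["correct"].mean()
+cohort_error = 1 - df.groupby("age_group")["correct"].mean()
+print(cohort_error[cohort_error > overall_error])  # underperforming cohorts
+```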
+
+## Transparency
+
+When AI systems help inform decisions that have tremendous impacts on people's lives, it's critical that people understand how those decisions were made. For example, a bank might use an AI system to decide whether a person is creditworthy. A company might use an AI system to determine the most qualified candidates to hire.
+
+A crucial part of transparency is *interpretability*: the useful explanation of the behavior of AI systems and their components. Improving interpretability requires stakeholders to comprehend how and why AI systems function the way they do. The stakeholders can then identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.
+
+**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
+
+The model interpretability component provides multiple views into a model's behavior:
+
+- *Global explanations*. For example, what features affect the overall behavior of a loan allocation model?
+- *Local explanations*. For example, why was a customer's loan application approved or rejected?
+- *Model explanations for a selected cohort of data points*. For example, what features affect the overall behavior of a loan allocation model for low-income applicants?
+
+The counterfactual what-if component enables understanding and debugging a machine learning model in terms of how it reacts to feature changes and perturbations.
+
+Azure Machine Learning also supports a [Responsible AI scorecard](concept-responsible-ai-scorecard.md). The scorecard is a customizable PDF report that developers can easily configure, generate, download, and share with their technical and non-technical stakeholders to educate them about their dataset and model health, achieve compliance, and build trust. This scorecard can also be used in audit reviews to uncover the characteristics of machine learning models.
+
+## Privacy and security
+
+As AI becomes more prevalent, protecting privacy and securing personal and business information are becoming more important and complex. With AI, privacy and data security require close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that:
+
+- Require transparency about the collection, use, and storage of data.
+- Mandate that consumers have appropriate controls to choose how their data is used.
+
+**Privacy and security in Azure Machine Learning**: Azure Machine Learning enables administrators and developers to [create a secure configuration that complies](concept-enterprise-security.md) with their companies' policies. With Azure Machine Learning and the Azure platform, users can:
+
+- Restrict access to resources and operations by user account or group.
+- Restrict incoming and outgoing network communications.
+- Encrypt data in transit and at rest.
+- Scan for vulnerabilities.
+- Apply and audit configuration policies.
+
+Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles:
+
+- [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core): Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building differentially private systems that are global. The sketch after this list illustrates the core idea with the Laplace mechanism.
+
+- [Counterfit](https://github.com/Azure/counterfit/): Counterfit is an open-source project that comprises a command-line tool and generic automation layer to allow developers to simulate cyberattacks against AI systems. Anyone can download the tool and deploy it through Azure Cloud Shell to run in a browser, or deploy it locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or on the edge. The tool is agnostic to AI models and supports various data types, including text, images, and generic input.
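+The following sketch is a library-agnostic toy example of the differential privacy idea mentioned above (it is not the SmartNoise API): answering a count query with calibrated Laplace noise.
+
+```python
+# Laplace mechanism sketch: a count query has sensitivity 1, so noise drawn
+# from Laplace(0, 1/epsilon) gives epsilon-differential privacy. Illustrative only.
+import numpy as np
+
+def dp_count(records, epsilon=0.5, rng=np.random.default_rng(0)):
+    true_count = len(records)
+    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
+    return true_count + noise
+
+print(dp_count(range(100)))  # close to 100, but the exact count is never reported
+```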
+
+## Accountability
+
+The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren't the final authority on any decision that affects people's lives. They can also ensure that humans maintain meaningful control over otherwise highly autonomous AI systems.
+
+**Accountability in Azure Machine Learning**: [Machine learning operations (MLOps)](concept-model-management-and-deployment.md) is based on DevOps principles and practices that increase the efficiency of AI workflows. Azure Machine Learning provides the following MLOps capabilities for better accountability of your AI systems:
+
+- Register, package, and deploy models from anywhere. You can also track the associated metadata that's required to use the model.
+- Capture the governance data for the end-to-end machine learning lifecycle. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
+- Notify and alert on events in the machine learning lifecycle. Examples include experiment completion, model registration, model deployment, and data drift detection.
+- Monitor applications for operational issues and issues related to machine learning. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your machine learning infrastructure.
+
+Besides the MLOps capabilities, the [Responsible AI scorecard](concept-responsible-ai-scorecard.md) in Azure Machine Learning creates accountability by enabling cross-stakeholder communication and by empowering developers to configure, download, and share their model health insights with their technical and non-technical stakeholders. Sharing these insights can help build trust.
+
+The machine learning platform also enables decision-making by informing business decisions through:
+
+- Data-driven insights, to help stakeholders understand causal treatment effects on an outcome, by using historical data only. For example, "How would a medicine affect a patient's blood pressure?" These insights are provided through the [causal inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Model-driven insights, to answer users' questions (such as "What can I do to get a different outcome from your AI next time?") so they can take action. Such insights are provided to data scientists through the [counterfactual what-if](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+
+## Next steps
+
+- For more information on how to implement Responsible AI in Azure Machine Learning, see [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-insights-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-insights-ui.md).
+- Learn how to generate a [Responsible AI scorecard](concept-responsible-ai-scorecard.md) based on the insights observed in your Responsible AI dashboard.
+- Learn about the [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf) for building AI systems according to six key principles.
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
Logging models has the following advantages:
> * Models can be used as pipeline inputs directly.
> * Models can be deployed without providing a scoring script or an environment.
> * Swagger is enabled in deployed endpoints automatically, and the __Test__ feature can be used in Azure ML studio.
-> * You can use the Responsable AI dashbord.
+> * You can use the Responsible AI dashboard.
There are different ways to start using the model's concept in Azure Machine Learning with MLflow, as explained in the following sections:
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Type | Input/Output | `direct` | `download` | `ro_mount`
`mlflow` | Output | ✓ | ✓ | ✓ |
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in a Jupyter Notebook. In the [azureml-examples](https://github.com/azure/azureml-examples) repository, open the notebook: [model.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/assets/model/model.ipynb).
+
## Create a model in the model registry

[Model registration](concept-model-management-and-deployment.md) allows you to store and version your models in the Azure cloud, in your workspace. The model registry helps you organize and keep track of your trained models.
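For example, a registered model can be created from a local file with the Python SDK v2. This is a minimal sketch: the asset name and path are illustrative, and `ml_client` is assumed to be an authenticated `MLClient`.

```python
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Register a local pickle file as a custom model asset (illustrative values).
file_model = Model(
    path="mlflow-model/model.pkl",
    type=AssetTypes.CUSTOM_MODEL,
    name="local-file-example",
    description="Model created from a local file.",
)
ml_client.models.create_or_update(file_model)
```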
For a complete example, see the [model notebook](https://github.com/Azure/azurem
To create a model in Machine Learning, from the UI, open the **Models** page. Select **Register model**, and select where your model is located. Fill out the required fields, and then select **Register**.
+
+## Use model as input in a job
+
+# [Azure CLI](#tab/cli)
+
+Create a job specification YAML file (`<file-name>.yml`). Specify in the `inputs` section of the job:
+
+1. The `type`: whether the model is an `mlflow_model`, `custom_model`, or `triton_model`.
+1. The `path` to where your data is located, which can be any of the paths outlined in the [Supported Paths](#supported-paths) section.
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+
+# Possible Paths for models:
+# AzureML Datastore: azureml://datastores/<datastore-name>/paths/<path_on_datastore>
+# MLflow run: runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>
+# Job: azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>
+# Model Asset: azureml:<my_model>:<version>
+command: |
+ ls ${{inputs.my_model}}
+code: <folder where code is located>
+inputs:
+ my_model:
+    type: <type> # mlflow_model, custom_model, triton_model
+ path: <path>
+environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+compute: azureml:cpu-cluster
+```
+
+Next, run the following command in the CLI:
+
+```azurecli
+az ml job create -f <file-name>.yml
+```
+
+# [Python SDK](#tab/python)
+
+The `Input` class allows you to define:
+
+1. The `type`: whether the model is an `mlflow_model`, `custom_model`, or `triton_model`.
+1. The `path` to where your data is located, which can be any of the paths outlined in the [Supported Paths](#supported-paths) section.
+
+```python
+from azure.ai.ml import command
+from azure.ai.ml.entities import Model
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml import MLClient
+
+# Possible Asset Types for Data:
+# AssetTypes.MLFLOW_MODEL
+# AssetTypes.CUSTOM_MODEL
+# AssetTypes.TRITON_MODEL
+
+# Possible Paths for Model:
+# Local path: mlflow-model/model.pkl
+# AzureML Datastore: azureml://datastores/<datastore-name>/paths/<path_on_datastore>
+# MLflow run: runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>
+# Job: azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>
+# Model Asset: azureml:<my_model>:<version>
+
+my_job_inputs = {
+ "input_model": Input(type=AssetTypes.MLFLOW_MODEL, path="mlflowmodel")
+}
+
+job = command(
+ code="./src", # local path where the code is stored
+ command="ls ${{inputs.input_model}}",
+ inputs=my_job_inputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster",
+)
+
+# submit the command
+returned_job = ml_client.jobs.create_or_update(job)
+# get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+```
++
+## Use model as output in a job
+
+In your job, you can write a model to your cloud-based storage by using *outputs*.
+
+# [Azure CLI](#tab/cli)
+
+Create a job specification YAML file (`<file-name>.yml`), with the `outputs` section populated with the type and the path where you would like to write your data:
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/CommandJob.schema.json
+
+# Possible Paths for Model:
+# Local path: mlflow-model/model.pkl
+# AzureML Datastore: azureml://datastores/<datastore-name>/paths/<path_on_datastore>
+# MLflow run: runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>
+# Job: azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>
+# Model Asset: azureml:<my_model>:<version>
+
+code: src
+command: >-
+ python load_write_model.py
+ --input_model ${{inputs.input_model}}
+ --custom_model_output ${{outputs.output_folder}}
+inputs:
+ input_model:
+    type: <type> # mlflow_model, custom_model, triton_model
+ path: <path>
+outputs:
+ output_folder:
+    type: <type> # mlflow_model, custom_model, triton_model
+environment: azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9
+compute: azureml:cpu-cluster
+```
+
+Next, create a job by using the CLI:
+
+```azurecli
+az ml job create --file <file-name>.yml
+```
+
+# [Python SDK](#tab/python)
+
+```python
+from azure.ai.ml import command
+from azure.ai.ml.entities import Model
+from azure.ai.ml import Input, Output
+from azure.ai.ml.constants import AssetTypes
+
+# Possible Asset Types for Model:
+# AssetTypes.MLFLOW_MODEL
+# AssetTypes.CUSTOM_MODEL
+# AssetTypes.TRITON_MODEL
+
+# Possible Paths for Model:
+# Local path: mlflow-model/model.pkl
+# AzureML Datastore: azureml://datastores/<datastore-name>/paths/<path_on_datastore>
+# MLflow run: runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>
+# Job: azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>
+# Model Asset: azureml:<my_model>:<version>
+
+my_job_inputs = {
+ "input_model": Input(type=AssetTypes.MLFLOW_MODEL, path="mlflow-model"),
+ "input_data": Input(type=AssetTypes.URI_FILE, path="./mlflow-model/input_example.json"),
+}
+
+my_job_outputs = {
+ "output_folder": Output(type=AssetTypes.CUSTOM_MODEL)
+}
+
+job = command(
+ code="./src", # local path where the code is stored
+ command="python load_write_model.py --input_model ${{inputs.input_model}} --output_folder ${{outputs.output_folder}}",
+ inputs=my_job_inputs,
+ outputs=my_job_outputs,
+ environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+ compute="cpu-cluster",
+)
+
+# submit the command
+returned_job = ml_client.create_or_update(job)
+# get a URL for the status of the job
+returned_job.services["Studio"].endpoint
+
+```
++
## Next steps

* [Install and set up Python SDK v2](https://aka.ms/sdk-v2-install)
machine-learning How To Responsible Ai Dashboard Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-ui.md
- Title: Generate a Responsible AI dashboard (preview) in the studio UI -
-description: Learn how to generate a Responsible AI dashboard with no-code experience in the Azure Machine Learning studio UI.
- Previously updated: 08/17/2022
-# Generate a Responsible AI dashboard (preview) in the studio UI
-
-In this article, you create a Responsible AI dashboard with a no-code experience in the [Azure Machine Learning studio UI](https://ml.azure.com/). To access the dashboard generation wizard, do the following:
-
-1. [Register your model](how-to-manage-models.md) in Azure Machine Learning so that you can access the no-code experience.
-1. On the left pane of Azure Machine Learning studio, select the **Models** tab.
-1. Select the registered model that you want to create Responsible AI insights for, and then select the **Details** tab.
-1. Select **Create Responsible AI dashboard (preview)**.
-
- :::image type="content" source="./media/how-to-responsible-ai-dashboard-ui/model-page.png" alt-text="Screenshot of the wizard details pane with 'Create Responsible AI dashboard (preview)' tab highlighted." lightbox ="./media/how-to-responsible-ai-dashboard-ui/model-page.png":::
-
-To learn more, see the Responsible AI dashboard [supported model types and limitations](concept-responsible-ai-dashboard.md#supported-scenarios-and-limitations).
-
-The wizard provides an interface for entering all the necessary parameters to create your Responsible AI dashboard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio UI. The studio presents a guided flow and instructional text to help contextualize the variety of choices about which Responsible AI components you'd like to populate your dashboard with.
-
-The wizard is divided into five sections:
-
-1. Datasets
-1. Modeling task
-1. Dashboard components
-1. Component parameters
-1. Experiment configuration
-
-## Select your datasets
-
-In the first section, you select the train and test datasets that you used when you trained your model to generate model-debugging insights. For components like causal analysis, which doesn't require a model, you use the train dataset to train the causal model to generate the causal insights.
-
-> [!NOTE]
-> Only tabular dataset formats are supported.
--
-1. **Select a dataset for training**: In the dropdown list of registered datasets in the Azure Machine Learning workspace, select the dataset you want to use to generate Responsible AI insights for components, such as model explanations and error analysis.
-
-1. **Select a dataset for testing**: In the dropdown list, select the dataset you want to use to populate your Responsible AI dashboard visualizations.
-
-1. If the train or test dataset you want to use isn't listed, select **New dataset** to upload it.
-
-## Select your modeling task
-
-After you've picked your datasets, select your modeling task type, as shown in the following image:
--
-> [!NOTE]
-> The wizard supports only models in MLflow format and with a sklearn (scikit-learn) flavor.
-
-## Select your dashboard components
-
-The Responsible AI dashboard offers two profiles for recommended sets of tools that you can generate:
-- **Model debugging**: Understand and debug erroneous data cohorts in your machine learning model by using error analysis, counterfactual what-if examples, and model explainability.
-- **Real-life interventions**: Understand and debug erroneous data cohorts in your machine learning model by using causal analysis.
-
- > [!NOTE]
- > Multi-class classification doesn't support the real-life interventions analysis profile.
--
-1. Select the profile you want to use.
-1. Select **Next**.
--
-## Configure parameters for dashboard components
-
-After you've selected a profile, the **Component parameters for model debugging** configuration pane for the corresponding components appears.
--
-Component parameters for model debugging:
-
-1. **Target feature (required)**: Specify the feature that your model was trained to predict.
-1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata.
-1. **Generate error tree and heat map**: Toggle on and off to generate an error analysis component for your Responsible AI dashboard.
-1. **Features for error heat map**: Select up to two features that you want to pre-generate an error heatmap for.
-1. **Advanced configuration**: Specify additional parameters, such as **Maximum depth of error tree**, **Number of leaves in error tree**, and **Minimum number of samples in each leaf node**.
-1. **Generate counterfactual what-if examples**: Toggle on and off to generate a counterfactual what-if component for your Responsible AI dashboard.
-1. **Number of counterfactuals (required)**: Specify the number of counterfactual examples that you want generated per data point. A minimum of 10 should be generated to enable a bar chart view of the features that were most perturbed, on average, to achieve the desired prediction.
-1. **Range of value predictions (required)**: Specify for regression scenarios the range that you want counterfactual examples to have prediction values in. For binary classification scenarios, the range will automatically be set to generate counterfactuals for the opposite class of each data point. For multi-classification scenarios, use the dropdown list to specify which class you want each data point to be predicted as.
-1. **Specify which features to perturb**: By default, all features will be perturbed. However, if you want only specific features to be perturbed, select **Specify which features to perturb for generating counterfactual explanations** to display a pane with a list of features to select.
-
- When you select **Specify which features to perturb**, you can specify the range you want to allow perturbations in. For example: for the feature YOE (Years of experience), specify that counterfactuals should have feature values ranging from only 10 to 21 instead of the default values of 5 to 21.
-
- :::image type="content" source="./media/how-to-responsible-ai-dashboard-ui/model-debug-counterfactuals.png" alt-text="Screenshot of the wizard, showing a pane of features you can specify to perturb." lightbox = "./media/how-to-responsible-ai-dashboard-ui/model-debug-counterfactuals.png":::
-
-1. **Generate explanations**: Toggle on and off to generate a model explanation component for your Responsible AI dashboard. No configuration is necessary, because a default opaque box mimic explainer will be used to generate feature importances.
-
-Alternatively, if you select the **Real-life interventions** profile, you'll see the following screen generate a causal analysis. This will help you understand the causal effects of features you want to "treat" on a certain outcome you want to optimize.
--
-Component parameters for real-life interventions use causal analysis. Do the following:
-
-1. **Target feature (required)**: Choose the outcome you want the causal effects to be calculated for.
-1. **Treatment features (required)**: Choose one or more features that you're interested in changing ("treating") to optimize the target outcome.
-1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata.
-1. **Advanced settings**: Specify additional parameters for your causal analysis, such as heterogeneous features (that is, additional features to understand causal segmentation in your analysis, in addition to your treatment features) and which causal model you want to be used.
-
-## Configure your experiment
-
-Finally, configure your experiment to kick off a job to generate your Responsible AI dashboard.
--
-On the **Training job or experiment configuration** pane, do the following:
-
-1. **Name**: Give your dashboard a unique name so that you can differentiate it when you're viewing the list of dashboards for a given model.
-1. **Experiment name**: Select an existing experiment to run the job in, or create a new experiment.
-1. **Existing experiment**: In the dropdown list, select an existing experiment.
-1. **Select compute type**: Specify which compute type you want to use to execute your job.
-1. **Select compute**: In the dropdown list, select the compute you want to use. If there are no existing compute resources, select the plus sign (**+**), create a new compute resource, and then refresh the list.
-1. **Description**: Add a longer description of your Responsible AI dashboard.
-1. **Tags**: Add any tags to this Responsible AI dashboard.
-
-After you've finished configuring your experiment, select **Create** to start generating your Responsible AI dashboard. You'll be redirected to the experiment page to track the progress of your job.
-
-In the "Next steps" section, you can learn how to view and use your Responsible AI dashboard.
-
-## Next steps
-- After you've generated your Responsible AI dashboard, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
-- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
-- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md).
-- Learn more about how to use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
-- Learn about how the Responsible AI dashboard and scorecard were used by the UK National Health Service (NHS) in a [real life customer story](https://aka.ms/NHSCustomerStory).
-- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning How To Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard.md
Title: Use the Responsible AI dashboard in Azure Machine Learning studio (preview)
+ Title: Use the Responsible AI dashboard in Azure Machine Learning studio
description: Learn how to use the various tools and visualization charts in the Responsible AI dashboard in Azure Machine Learning.
- Previously updated: 08/17/2022
+ Last updated: 11/09/2022
-# Use the Responsible AI dashboard (preview) in Azure Machine Learning studio
+# Use the Responsible AI dashboard in Azure Machine Learning studio
-Responsible AI dashboards are linked to your registered models. To view your Responsible AI dashboard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Then, select the **Responsible AI (preview)** tab to view a list of generated dashboards.
+Responsible AI dashboards are linked to your registered models. To view your Responsible AI dashboard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Then, select the **Responsible AI** tab to view a list of generated dashboards.
You can configure multiple dashboards and attach them to your registered model. Various combinations of components (interpretability, error analysis, causal analysis, and so on) can be attached to each Responsible AI dashboard. The following image displays a dashboard's customization and the components that were generated within it. In each dashboard, you can view or hide various components within the dashboard UI itself. Select the name of the dashboard to open it into a full view in your browser. To return to your list of dashboards, you can select **Back to models details** at any time.
The Responsible AI dashboard includes a robust, rich set of visualizations and f
- [Global controls](#global-controls)
- [Error analysis](#error-analysis)
-- [Model overview](#model-overview)
-- [Data explorer](#data-explorer)
+- [Model overview and fairness metrics](#model-overview-and-fairness-metrics)
+- [Data analysis](#data-analysis)
- [Feature importance (model explanations)](#feature-importances-model-explanations)
- [Counterfactual what-if](#counterfactual-what-if)
- [Causal analysis](#causal-analysis)
Select the **Heat map** tab to switch to a different view of the error in the da
5. **Cells**: Represents a cohort of the dataset, with filters applied, and the percentage of errors out of the total number of data points in the cohort. A blue outline indicates selected cells, and the darkness of red represents the concentration of failures.
6. **Prediction path (filters)**: Lists the filters placed over the full dataset for each selected cohort.
-### Model overview
+### Model overview and fairness metrics
The model overview component provides a comprehensive set of performance and fairness metrics for evaluating your model, along with key performance disparity metrics along specified features and dataset cohorts.
The model overview component provides a comprehensive set of performance and fai
On the **Dataset cohorts** pane, you can investigate your model by comparing the model performance of various user-specified dataset cohorts (accessible via the **Cohort settings** icon at the top right of the dashboard).
-> [!NOTE]
-> You can create new dataset cohorts from the UI experience or pass your pre-built cohorts to the dashboard via the SDK experience.
-
:::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-dataset-cohorts.png" alt-text="Screenshot of the 'Model overview' pane, showing the 'Dataset cohorts' tab." lightbox="./media/how-to-responsible-ai-dashboard/model-overview-dataset-cohorts.png":::

1. **Help me choose metrics**: Select this icon to open a panel with more information about what model performance metrics are available to be shown in the table. Easily adjust which metrics to view by using the multi-select dropdown list to select and deselect performance metrics.
Select **Help me choose metrics** to open a panel with a list of model performan
| Regression | Mean absolute error, Mean squared error, R-squared, Mean prediction. |
| Classification | Accuracy, Precision, Recall, F1 score, False positive rate, False negative rate, Selection rate. |
-Classification scenarios support accuracy scores, precision scores, recall, false positive rate, false negative rate, and selection rate (the percentage of predictions with label 1):
---
-Regression scenarios support mean absolute error, mean squared error, and mean prediction:
---
#### Feature cohorts

On the **Feature cohorts** pane, you can investigate your model by comparing model performance across user-specified sensitive and non-sensitive features (for example, performance across various gender, race, and income level cohorts).
On the **Feature cohorts** pane, you can investigate your model by comparing mod
:::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png" alt-text="Screenshot of the dashboard 'Model overview' pane, showing how to choose cohorts." lightbox="./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png":::

8. **Choose metric (x-axis)**: Select this button to choose which metric to view in the bar chart.
+### Data analysis
-### Data explorer
+With the data analysis component, the **Table view** pane shows you a table view of your dataset for all features and rows.
-With the data explorer component, you can analyze data statistics along the x-axis and y-axis by using filters such as predicted outcome, dataset features, and error groups. This component helps you understand overrepresentation and underrepresentation in your dataset.
+The **Chart view** pane shows you aggregate and individual plots of data points. You can analyze data statistics along the x-axis and y-axis by using filters such as predicted outcome, dataset features, and error groups. This view helps you understand overrepresentation and underrepresentation in your dataset.
1. **Select a dataset cohort to explore**: Specify which dataset cohort from your list of cohorts you want to view data statistics for.
2. **X-axis**: Displays the type of value being plotted horizontally. Modify the values by selecting the button to open a side panel.
With the data explorer component, you can analyze data statistics along the x-ax
By selecting the **Individual data points** option under **Chart type**, you can shift to a disaggregated view of the data with the availability of a color axis.

### Feature importances (model explanations)
Counterfactual analysis provides a diverse set of *what-if* examples generated b
### Causal analysis
+The next sections cover how to read the causal analysis for your dataset on select user-specified treatments.
+
#### Aggregate causal effects

Select the **Aggregate causal effects** tab of the causal analysis component to display the average causal effects for pre-defined treatment features (the features that you want to treat to optimize your outcome).
Select the **Treatment policy** tab to switch to a view to help determine real-w
## Next steps
-- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
+- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](concept-responsible-ai-scorecard.md).
- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.
- Explore the features of the Responsible AI dashboard through this [interactive AI lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning How To Responsible Ai Insights Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-insights-sdk-cli.md
+
+ Title: Generate Responsible AI insights with YAML and Python
+
+description: Learn how to generate Responsible AI insights with Python and YAML in Azure Machine Learning.
+ Last updated: 11/09/2022
+# Generate Responsible AI insights with YAML and Python
++
+You can generate a Responsible AI dashboard and scorecard via a pipeline job by using Responsible AI components. There are six core components for creating Responsible AI dashboards, along with a couple of helper components. Here's a sample experiment graph:
++
+## Responsible AI components
+
+The core components for constructing the Responsible AI dashboard in Azure Machine Learning are:
+
+- `RAI Insights dashboard constructor`
+- The tool components:
+ - `Add Explanation to RAI Insights dashboard`
+ - `Add Causal to RAI Insights dashboard`
+ - `Add Counterfactuals to RAI Insights dashboard`
+ - `Add Error Analysis to RAI Insights dashboard`
+ - `Gather RAI Insights dashboard`
+ - `Gather RAI Insights score card`
+
+The `RAI Insights dashboard constructor` and `Gather RAI Insights dashboard` components are always required, plus at least one of the tool components. However, it isn't necessary to use all the tools in every Responsible AI dashboard.
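+For orientation, here's a skeletal sketch of how these pieces could be wired together in a pipeline. It assumes the component handles (`rai_constructor_component`, `rai_causal_component`, `rai_gather_component`) and inputs are loaded as shown in the sections below, and the gather component's port names (`constructor`, `insight_1`, `dashboard`) are assumptions based on the descriptions in this article, not a definitive implementation:
+
+```python
+# Skeletal sketch only; component handles and port names are assumptions.
+from azure.ai.ml import dsl
+
+@dsl.pipeline(description="Responsible AI dashboard pipeline (sketch)")
+def rai_dashboard_pipeline(train_data, test_data, target_column_name):
+    construct_job = rai_constructor_component(
+        title="From Python",
+        task_type="classification",
+        model_input=model_input,  # an Input pointing at a registered MLflow model
+        train_dataset=train_data,
+        test_dataset=test_data,
+        target_column_name=target_column_name,
+    )
+    causal_job = rai_causal_component(
+        rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
+        treatment_features='["YOE"]',
+    )
+    gather_job = rai_gather_component(
+        constructor=construct_job.outputs.rai_insights_dashboard,
+        insight_1=causal_job.outputs.causal,  # tool outputs feed the insight_[n] ports
+    )
+    return {"dashboard": gather_job.outputs.dashboard}
+```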
+
+The following sections provide specifications of the Responsible AI components and examples of code snippets in YAML and Python. To view the full code, see [sample YAML and Python notebook](https://aka.ms/RAIsamplesProgrammer).
+
+### Limitations
+
+The current set of components has a number of limitations on its use:
+
+- All models must be registered in Azure Machine Learning in MLflow format with a sklearn (scikit-learn) flavor.
+- The models must be loadable in the component environment.
+- The models must be pickleable.
+- The models must be supplied to the Responsible AI components by using the `Fetch Registered Model` component, which we provide.
+- The dataset inputs must be in `mltable` format.
+- A model must be supplied even if only a causal analysis of the data is performed. You can use the `DummyClassifier` and `DummyRegressor` estimators from scikit-learn for this purpose, as in the sketch below.
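+For instance, here's a minimal sketch (with illustrative data) of logging such a dummy model in MLflow format with a sklearn flavor:
+
+```python
+# Log a scikit-learn dummy model so a causal-only analysis has a model to supply.
+import mlflow
+import pandas as pd
+from sklearn.dummy import DummyClassifier
+
+X_train = pd.DataFrame({"age": [25, 40, 33, 51], "income": [40, 85, 60, 72]})
+y_train = [0, 1, 0, 1]
+
+dummy = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
+
+with mlflow.start_run():
+    # Produces an MLflow-format model with a sklearn flavor, as the components require.
+    mlflow.sklearn.log_model(dummy, artifact_path="dummy_model")
+```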
+
+### RAI Insights dashboard constructor
+
+This component has three input ports:
+
+- The machine learning model
+- The training dataset
+- The test dataset
+
+To generate model-debugging insights with components such as error analysis and model explanations, use the training and test datasets that you used when you trained your model. For components like causal analysis, which doesn't require a model, you use the training dataset to train the causal model to generate the causal insights. You use the test dataset to populate your Responsible AI dashboard visualizations.
+
+The easiest way to supply the model is to register the input model and reference the same model in the model input port of the `RAI Insights dashboard constructor` component, which we discuss later in this article.
+
+> [!NOTE]
+> Currently, only models in MLflow format and with a `sklearn` flavor are supported.
+
+The two datasets should be in `mltable` format. The training and test datasets provided don't have to be the same datasets that are used in training the model, but they can be the same. By default, for performance reasons, the test dataset is restricted to 5,000 rows in the visualization UI.
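+One way to provide such datasets is to register a folder that contains an MLTable file as a data asset. Here's a hedged sketch with the Python SDK v2 (paths and names are illustrative, and `ml_client` is assumed to be an authenticated `MLClient`):
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+# Register a folder containing an MLTable file as a data asset (illustrative values).
+train_data = Data(
+    path="./data/train",  # folder with an MLTable file inside
+    type=AssetTypes.MLTABLE,
+    name="rai-train-data",
+)
+ml_client.data.create_or_update(train_data)
+```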
+
+The constructor component also accepts the following parameters:
+
+| Parameter name | Description | Type |
+||||
+| `title` | Brief description of the dashboard. | String |
+| `task_type` | Specifies whether the model is for classification or regression. | String, `classification` or `regression` |
+| `target_column_name` | The name of the column in the input datasets that the model is trying to predict. | String |
+| `maximum_rows_for_test_dataset` | The maximum number of rows allowed in the test dataset, for performance reasons. | Integer, defaults to 5,000 |
+| `categorical_column_names` | The columns in the datasets that represent categorical data. | Optional list of strings<sup>1</sup> |
+| `classes` | The full list of class labels in the training dataset. | Optional list of strings<sup>1</sup> |
+
+<sup>1</sup> The lists should be supplied as a single JSON-encoded string for `categorical_column_names` and `classes` inputs.
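+For example, in Python such a string can be produced with `json.dumps` (the values here are illustrative):
+
+```python
+import json
+
+# Encode the list parameters as single JSON strings before passing them in.
+categorical_column_names = json.dumps(["location", "style", "OS"])
+classes = json.dumps(["rejected", "approved"])
+print(categorical_column_names)  # '["location", "style", "OS"]'
+```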
+
+The constructor component has a single output named `rai_insights_dashboard`. This is an empty dashboard, which the individual tool components operate on. All the results are assembled by the `Gather RAI Insights dashboard` component at the end.
+
+# [YAML](#tab/yaml)
+
+```yml
+  create_rai_job:
+    type: command
+    component: azureml://registries/azureml/components/microsoft_azureml_rai_tabular_insight_constructor/versions/<get current version>
+    inputs:
+      title: From YAML snippet
+      task_type: regression
+      model_input:
+        type: mlflow_model
+        path: azureml:<registered_model_name>:<registered model version>
+      train_dataset: ${{parent.inputs.my_training_data}}
+      test_dataset: ${{parent.inputs.my_test_data}}
+      target_column_name: ${{parent.inputs.target_column_name}}
+      categorical_column_names: '["location", "style", "job title", "OS", "Employer", "IDE", "Programming language"]'
+```
+
+# [Python SDK](#tab/python)
+
+First load the component:
+
+```python
+# First load the component:
+rai_constructor_component = ml_client_registry.components.get(
+    name="microsoft_azureml_rai_tabular_insight_constructor", label="latest"
+)
+
+# Then, inside the pipeline:
+construct_job = rai_constructor_component(
+    title="From Python",
+    task_type="classification",
+    model_input=Input(type=AssetTypes.MLFLOW_MODEL, path="<azureml:model_name:model_id>"),
+    train_dataset=train_data,
+    test_dataset=test_data,
+    target_column_name=target_column_name,
+    categorical_column_names='["location", "style", "job title", "OS", "Employer", "IDE", "Programming language"]',
+    maximum_rows_for_test_dataset=5000,
+    classes="[]",
+)
+```
+++
+### Add Causal to RAI Insights dashboard
+
+This component performs a causal analysis on the supplied datasets. It has a single input port, which accepts the output of the `RAI Insights dashboard constructor`. It also accepts the following parameters:
+
+| Parameter name | Description | Type&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |
+||||
+| `treatment_features` | A list of feature names in the datasets, which are potentially "treatable" to obtain different outcomes. | List of strings<sup>2</sup>. |
+| `heterogeneity_features` | A list of feature names in the datasets, which might affect how the "treatable" features behave. By default, all features will be considered. | Optional list of strings<sup>2</sup>.|
+| `nuisance_model` | The model used to estimate the outcome of changing the treatment features. | Optional string. Must be `linear` or `AutoML`, defaulting to `linear`. |
+| `heterogeneity_model` | The model used to estimate the effect of the heterogeneity features on the outcome. | Optional string. Must be `linear` or `forest`, defaulting to `linear`. |
+| `alpha` | Confidence level of confidence intervals. | Optional floating point number, defaults to 0.05. |
+| `upper_bound_on_cat_expansion` | The maximum expansion of categorical features. | Optional integer, defaults to 50. |
+| `treatment_cost` | The cost of the treatments. If 0, all treatments will have zero cost. If a list is passed, each element is applied to one of the `treatment_features`.<br><br>Each element can be a scalar value to indicate a constant cost of applying that treatment or an array indicating the cost for each sample. If the treatment is a discrete treatment, the array for that feature should be two dimensional, with the first dimension representing samples and the second representing the difference in cost between the non-default values and the default value. | Optional integer or list<sup>2</sup>.|
+| `min_tree_leaf_samples` | The minimum number of samples per leaf in the policy tree. | Optional integer, defaults to 2. |
+| `max_tree_depth` | The maximum depth of the policy tree. | Optional integer, defaults to 2. |
+| `skip_cat_limit_checks` | By default, categorical features need to have several instances of each category in order for a model to be fit robustly. Setting this to `True` will skip these checks. |Optional Boolean, defaults to `False`. |
+| `categories` | The categories to use for the categorical columns. If `auto`, the categories will be inferred for all categorical columns. Otherwise, this argument should have as many entries as there are categorical columns.<br><br>Each entry should be either `auto` to infer the values for that column or the list of values for the column. If explicit values are provided, the first value is treated as the "control" value for that column against which other values are compared. | Optional, `auto` or list<sup>2</sup>. |
+| `n_jobs` | The degree of parallelism to use. | Optional integer, defaults to 1. |
+| `verbose` | Expresses whether to provide detailed output during the computation. | Optional integer, defaults to 1. |
+| `random_state` | Seed for the pseudorandom number generator (PRNG). | Optional integer. |
+
+<sup>2</sup> For the `list` parameters: Several of the parameters accept lists of other types (strings, numbers, even other lists). To pass these into the component, they must first be JSON-encoded into a single string.
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the `Gather RAI Insights dashboard` component.
+
+# [YAML](#tab/yaml)
+
+```yml
+  causal_01:
+    type: command
+    component: azureml://registries/azureml/components/microsoft_azureml_rai_tabular_causal/versions/<version>
+    inputs:
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      treatment_features: '["Number of GitHub repos contributed to", "YOE"]'
+```
+
+# [Python SDK](#tab/python)
+
+```python
+# First load the component:
+rai_causal_component = ml_client_registry.components.get(
+    name="microsoft_azureml_rai_tabular_causal", label="latest"
+)
+
+# Use it inside a pipeline definition:
+causal_job = rai_causal_component(
+    rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
+    treatment_features='["Number of GitHub repos contributed to", "YOE"]',
+)
+```
+++
+### Add Counterfactuals to RAI Insights dashboard
+
+This component generates counterfactual points for the supplied test dataset. It has a single input port, which accepts the output of the RAI Insights dashboard constructor. It also accepts the following parameters:
+
+| Parameter name | Description | Type |
+||||
+| `total_CFs` | The number of counterfactual points to generate for each row in the test dataset. | Optional integer, defaults to 10. |
+| `method` | The `dice-ml` explainer to use. | Optional string. Either `random`, `genetic`, or `kdtree`. Defaults to `random`. |
+| `desired_class` | Index identifying the desired counterfactual class. For binary classification, this should be set to `opposite`. | Optional string or integer. Defaults to 0. |
+| `desired_range` | For regression problems, identify the desired range of outcomes. | Optional list of two numbers<sup>3</sup>. |
+| `permitted_range` | Dictionary with feature names as keys and the permitted range in a list as values. Defaults to the range inferred from training data. | Optional string or list<sup>3</sup>.|
+| `features_to_vary` | Either a string `all` or a list of feature names to vary. | Optional string or list<sup>3</sup>.|
+| `feature_importance` | Flag to enable computation of feature importances by using `dice-ml`. |Optional Boolean. Defaults to `True`. |
+
+<sup>3</sup> For the non-scalar parameters: Parameters that are lists or dictionaries should be passed as single JSON-encoded strings.
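+
+For example, a minimal illustration with Python's `json` module (the feature names and bounds are hypothetical):
+
+```python
+import json
+
+desired_range = json.dumps([5, 10])                 # target range for regression
+permitted_range = json.dumps({"YOE": [5, 21]})      # per-feature allowed ranges
+features_to_vary = json.dumps(["YOE", "Employer"])  # subset of features to vary
+```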
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the `Gather RAI Insights dashboard` component.
+
+# [YAML](#tab/yaml)
+
+```yml
+  counterfactual_01:
+    type: command
+    component: azureml://registries/azureml/components/microsoft_azureml_rai_tabular_counterfactual/versions/<version>
+    inputs:
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      total_CFs: 10
+      desired_range: "[5, 10]"
+```
++
+# [Python SDK](#tab/python)
+
+```python
+# First, load the component:
+rai_counterfactual_component = ml_client_registry.components.get(
+    name="microsoft_azureml_rai_tabular_counterfactual", label="latest"
+)
+
+# Then, use it in a pipeline function:
+counterfactual_job = rai_counterfactual_component(
+    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
+    total_cfs=10,
+    desired_range="[5, 10]",
+)
+```
+++
+### Add Error Analysis to RAI Insights dashboard
+
+This component generates an error analysis for the model. It has a single input port, which accepts the output of the `RAI Insights Dashboard Constructor`. It also accepts the following parameters:
+
+| Parameter name | Description | Type |
+||||
+| `max_depth` | The maximum depth of the error analysis tree. | Optional integer. Defaults to 3. |
+| `num_leaves` | The maximum number of leaves in the error tree. | Optional integer. Defaults to 31. |
+| `min_child_samples` | The minimum number of datapoints required to produce a leaf. | Optional integer. Defaults to 20. |
+| `filter_features` | A list of one or two features to use for the matrix filter. | Optional list, to be passed as a single JSON-encoded string. |
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the `Gather RAI Insights dashboard` component.
+
+# [YAML](#tab/yaml)
+
+```yml
+  error_analysis_01:
+    type: command
+    component: azureml://registries/azureml/components/microsoft_azureml_rai_tabular_erroranalysis/versions/<version>
+    inputs:
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      filter_features: '["style", "Employer"]'
+```
+
+# [Python SDK](#tab/python)
+
+```python
+# First, load the component:
+rai_erroranalysis_component = ml_client_registry.components.get(
+    name="microsoft_azureml_rai_tabular_erroranalysis", label="latest"
+)
+
+# Then, use it inside a pipeline:
+erroranalysis_job = rai_erroranalysis_component(
+    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
+    filter_features='["style", "Employer"]',
+)
+```
+++
+### Add Explanation to RAI Insights dashboard
+
+This component generates an explanation for the model. It has a single input port, which accepts the output of the `RAI Insights Dashboard Constructor`. It accepts a single, optional comment string as a parameter.
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the `Gather RAI Insights dashboard` component.
++
+# [YAML](#tab/yaml)
+
+```yml
+  explain_01:
+    type: command
+    component: azureml://registries/azureml/components/microsoft_azureml_rai_tabular_explanation/versions/<version>
+    inputs:
+      comment: My comment
+      rai_insights_dashboard: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+```
++
+# [Python SDK](#tab/python)
+
+```python
+# First, load the component:
+rai_explanation_component = ml_client_registry.components.get(
+    name="microsoft_azureml_rai_tabular_explanation", label="latest"
+)
+
+# Then, use it inside a pipeline:
+explain_job = rai_explanation_component(
+    comment="My comment",
+    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
+)
+```
++
+### Gather RAI Insights dashboard
+
+This component assembles the generated insights into a single Responsible AI dashboard. It has five input ports:
+
+- The `constructor` port that must be connected to the RAI Insights dashboard constructor component.
+- Four `insight_[n]` ports that can be connected to the output of the tool components. At least one of these ports must be connected.
+
+There are two output ports:
+- The `dashboard` port contains the completed `RAIInsights` object.
+- The `ux_json` port contains the data required to display a minimal dashboard.
++
+# [YAML](#tab/yaml)
+
+```yml
+  gather_01:
+    type: command
+    component: azureml://registries/azureml/components/microsoft_azureml_rai_tabular_insight_gather/versions/<version>
+    inputs:
+      constructor: ${{parent.jobs.create_rai_job.outputs.rai_insights_dashboard}}
+      insight_1: ${{parent.jobs.causal_01.outputs.causal}}
+      insight_2: ${{parent.jobs.counterfactual_01.outputs.counterfactual}}
+      insight_3: ${{parent.jobs.error_analysis_01.outputs.error_analysis}}
+      insight_4: ${{parent.jobs.explain_01.outputs.explanation}}
+```
++
+# [Python SDK](#tab/python)
+
+```python
+# First, load the component:
+rai_gather_component = ml_client_registry.components.get(
+    name="microsoft_azureml_rai_tabular_insight_gather", label="latest"
+)
+
+# Then, use it in a pipeline:
+rai_gather_job = rai_gather_component(
+    constructor=create_rai_job.outputs.rai_insights_dashboard,
+    insight_1=explain_job.outputs.explanation,
+    insight_2=causal_job.outputs.causal,
+    insight_3=counterfactual_job.outputs.counterfactual,
+    insight_4=erroranalysis_job.outputs.error_analysis,
+)
+```
++++
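+
+As a rough illustration of how these pieces compose, here's a minimal, hypothetical pipeline sketch. It assumes the component objects loaded in the snippets above, a placeholder compute target named `cpucluster`, and that `rai_insights` stands in for the constructor job's `rai_insights_dashboard` output described earlier in this article:
+
+```python
+from azure.ai.ml import dsl
+
+@dsl.pipeline(compute="cpucluster", description="Assemble an RAI dashboard")
+def rai_dashboard_pipeline(rai_insights):
+    # Generate one insight and gather it into the final dashboard.
+    explain_job = rai_explanation_component(
+        comment="My comment",
+        rai_insights_dashboard=rai_insights,
+    )
+    rai_gather_job = rai_gather_component(
+        constructor=rai_insights,
+        insight_1=explain_job.outputs.explanation,
+    )
+    return {"dashboard": rai_gather_job.outputs.dashboard}
+```
+
+You'd then instantiate the pipeline and submit it with `ml_client.jobs.create_or_update(...)`, as with any other pipeline job.
+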
+## How to generate a Responsible AI scorecard
+
+The configuration stage requires you to use your domain expertise around the problem to set your desired target values on model performance and fairness metrics.
+
+Like the other Responsible AI dashboard components configured in the YAML pipeline, you can add a component that generates the scorecard:
+
+```yml
+scorecard_01:
+
+ type: command
+ component: azureml:rai_score_card@latest
+ inputs:
+ dashboard: ${{parent.jobs.gather_01.outputs.dashboard}}
+ pdf_generation_config:
+ type: uri_file
+ path: ./pdf_gen.json
+ mode: download
+
+ predefined_cohorts_json:
+ type: uri_file
+ path: ./cohorts.json
+ mode: download
+
+```
+
+Here, *pdf_gen.json* is the scorecard generation configuration JSON file, and *cohorts.json* (supplied through the `predefined_cohorts_json` input) is the prebuilt cohorts definition JSON file.
+
+Here's a sample JSON file for cohorts definition and scorecard-generation configuration:
++
+Cohorts definition:
+```json
+[
+ {
+ "name": "High Yoe",
+ "cohort_filter_list": [
+ {
+ "method": "greater",
+ "arg": [
+ 5
+ ],
+ "column": "YOE"
+ }
+ ]
+ },
+ {
+ "name": "Low Yoe",
+ "cohort_filter_list": [
+ {
+ "method": "less",
+ "arg": [
+ 6.5
+ ],
+ "column": "YOE"
+ }
+ ]
+ }
+]
+```
+
+Here's a scorecard-generation configuration file as a regression example:
+
+```json
+{
+ "Model": {
+ "ModelName": "GPT-2 Access",
+ "ModelType": "Regression",
+ "ModelSummary": "This is a regression model to analyze how likely a programmer is given access to GPT-2"
+ },
+ "Metrics": {
+ "mean_absolute_error": {
+ "threshold": "<=20"
+ },
+ "mean_squared_error": {}
+ },
+ "FeatureImportance": {
+ "top_n": 6
+ },
+ "DataExplorer": {
+ "features": [
+ "YOE",
+ "age"
+ ]
+ },
+ "Fairness": {
+ "metric": ["mean_squared_error"],
+ "sensitive_features": ["YOUR SENSITIVE ATTRIBUTE"],
+ "fairness_evaluation_kind": "difference OR ratio"
+ },
+ "Cohorts": [
+ "High Yoe",
+ "Low Yoe"
+ ]
+}
+```
+
+Here's a scorecard-generation configuration file as a classification example:
+
+```json
+{
+ "Model": {
+ "ModelName": "Housing Price Range Prediction",
+ "ModelType": "Classification",
+ "ModelSummary": "This model is a classifier that predicts whether the house will sell for more than the median price."
+ },
+ "Metrics" :{
+ "accuracy_score": {
+ "threshold": ">=0.85"
+ },
+ }
+ "FeatureImportance": {
+ "top_n": 6
+ },
+ "DataExplorer": {
+ "features": [
+ "YearBuilt",
+ "OverallQual",
+ "GarageCars"
+ ]
+ },
+ "Fairness": {
+ "metric": ["accuracy_score", "selection_rate"],
+ "sensitive_features": ["YOUR SENSITIVE ATTRIBUTE"],
+ "fairness_evaluation_kind": "difference OR ratio"
+ }
+}
+```
+
+### Definition of inputs for the Responsible AI scorecard component
+
+This section lists and defines the parameters that are required to configure the Responsible AI scorecard component.
+
+#### Model
+
+| Parameter | Description |
+|||
+| `ModelName` | The name of the model. |
+| `ModelType` | Values in ['classification', 'regression']. |
+| `ModelSummary` | Enter text that summarizes what the model is for. |
+
+> [!NOTE]
+> For multi-class classification, you should first use the One-vs-Rest strategy to choose your reference class, and then split your multi-class classification model into a binary classification problem for your selected reference class versus the rest of the classes.
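+
+As a minimal sketch of that strategy (class 0 here is a hypothetical reference class, with an illustrative dataset and estimator):
+
+```python
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+
+X, y = load_iris(return_X_y=True)  # three classes
+y_binary = (y == 0).astype(int)    # reference class (0) vs. the rest
+binary_model = LogisticRegression(max_iter=1000).fit(X, y_binary)
+```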
+
+#### Metrics
+
+| Performance metric | Definition | Model type |
+||||
+| `accuracy_score` | The fraction of data points that are classified correctly. | Classification |
+| `precision_score` | The fraction of data points that are classified correctly among those classified as 1. | Classification |
+| `recall_score` | The fraction of data points that are classified correctly among those whose true label is 1. Alternative names: true positive rate, sensitivity. | Classification |
+| `f1_score` | The F1 score is the harmonic mean of precision and recall. | Classification |
+| `error_rate` | The proportion of instances that are misclassified over the whole set of instances. | Classification |
+| `mean_absolute_error` | The average of absolute values of errors. More robust to outliers than `mean_squared_error`. | Regression |
+| `mean_squared_error` | The average of squared errors. | Regression |
+| `median_absolute_error` | The median of the absolute errors. | Regression |
+| `r2_score` | The fraction of variance in the labels explained by the model. | Regression |
+
+Threshold: The desired threshold for the selected metric. Allowed mathematical tokens are >, <, >=, and <=, followed by a real number. For example, >= 0.75 means that the target for the selected metric is greater than or equal to 0.75.
+
+#### Feature importance
+
+top_n: The number of features to show, with a maximum of 10. Positive integers up to 10 are allowed.
+
+#### Fairness
+
+| Metric | Definition |
+|--|--|
+| `metric` | The primary metric for evaluating fairness. |
+| `sensitive_features` | A list of feature names from the input dataset to be designated as sensitive features for the fairness report. |
+| `fairness_evaluation_kind` | Values in ['difference', 'ratio']. |
+| `threshold` | The *desired target values* of the fairness evaluation. Allowed mathematical tokens are >, <, >=, and <=, followed by a real number.<br>For example, metric="accuracy", fairness_evaluation_kind="difference".<br><= 0.05 means that the target for the difference in accuracy is less than or equal to 0.05. |
+
+> [!NOTE]
+> Your choice of `fairness_evaluation_kind` (selecting 'difference' versus 'ratio') affects the scale of your target value. In your selection, be sure to choose a meaningful target value.
+
+You can select from the following metrics, paired with `fairness_evaluation_kind`, to configure your fairness assessment component of the scorecard:
+
+| Metric | fairness_evaluation_kind | Definition | Model type |
+|||||
+| `accuracy_score` | difference | The maximum difference in accuracy score between any two groups. | Classification |
+| `accuracy_score` | ratio | The minimum ratio in accuracy score between any two groups. | Classification |
+| `precision_score` | difference | The maximum difference in precision score between any two groups. | Classification |
+| `precision_score` | ratio | The maximum ratio in precision score between any two groups. | Classification |
+| `recall_score` | difference | The maximum difference in recall score between any two groups. | Classification |
+| `recall_score` | ratio | The maximum ratio in recall score between any two groups. | Classification |
+| `f1_score` | difference | The maximum difference in f1 score between any two groups. | Classification |
+| `f1_score` | ratio | The maximum ratio in f1 score between any two groups. | Classification |
+| `error_rate` | difference | The maximum difference in error rate between any two groups. | Classification |
+| `error_rate` | ratio | The maximum ratio in error rate between any two groups.|Classification|
+| `selection_rate` | difference | The maximum difference in selection rate between any two groups. | Classification |
+| `selection_rate` | ratio | The maximum ratio in selection rate between any two groups. | Classification |
+| `mean_absolute_error` | difference | The maximum difference in mean absolute error between any two groups. | Regression |
+| `mean_absolute_error` | ratio | The maximum ratio in mean absolute error between any two groups. | Regression |
+| `mean_squared_error` | difference | The maximum difference in mean squared error between any two groups. | Regression |
+| `mean_squared_error` | ratio | The maximum ratio in mean squared error between any two groups. | Regression |
+| `median_absolute_error` | difference | The maximum difference in median absolute error between any two groups. | Regression |
+| `median_absolute_error` | ratio | The maximum ratio in median absolute error between any two groups. | Regression |
+| `r2_score` | difference | The maximum difference in R<sup>2</sup> score between any two groups. | Regression |
+| `r2_score` | ratio | The maximum ratio in R<sup>2</sup> score between any two groups. | Regression |
+
+## Input constraints
+
+### What model formats and flavors are supported?
+
+The model must be saved in MLflow format, with the sklearn flavor available. Additionally, the model needs to be loadable in the environment that's used by the Responsible AI components.
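+
+For example, a scikit-learn model might be saved in MLflow format like this (the dataset, estimator, and output path are illustrative):
+
+```python
+import mlflow.sklearn
+from sklearn.datasets import load_iris
+from sklearn.ensemble import RandomForestClassifier
+
+X, y = load_iris(return_X_y=True)
+model = RandomForestClassifier(n_estimators=10).fit(X, y)
+
+# Writes an MLflow model directory with the sklearn flavor, which can then
+# be registered as a model in Azure Machine Learning.
+mlflow.sklearn.save_model(model, "./rai_model")
+```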
+
+### What data formats are supported?
+
+The supplied datasets should be `mltable` data assets that contain tabular data.
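+
+For example, a folder that contains an `MLTable` definition file alongside the data can be registered as a data asset with the Python SDK v2. The path and asset name below are placeholders, and `ml_client` is assumed to be an authenticated `MLClient`:
+
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+train_data = Data(
+    path="./train_data",  # folder containing an 'MLTable' file plus the data
+    type=AssetTypes.MLTABLE,
+    name="my_training_mltable",
+)
+ml_client.data.create_or_update(train_data)
+```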
+
+## Next steps
+
+- After you've generated your Responsible AI dashboard, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
+- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md).
+- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate the Responsible AI dashboard with YAML or Python.
+- Learn more about how to use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+- Learn about how the Responsible AI dashboard and scorecard were used by the UK National Health Service (NHS) in a [real life customer story](https://aka.ms/NHSCustomerStory).
+- Explore the features of the Responsible AI dashboard through this [interactive AI lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning How To Responsible Ai Insights Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-insights-ui.md
+
+ Title: Generate Responsible AI insights in the studio UI
+
+description: Learn how to generate Responsible AI insights with a no-code experience in the Azure Machine Learning studio UI.
+++++++ Last updated : 11/09/2022+++
+# Generate Responsible AI insights in the studio UI
+
+In this article, you create a Responsible AI dashboard and scorecard (preview) with a no-code experience in the [Azure Machine Learning studio UI](https://ml.azure.com/).
+
+To access the dashboard generation wizard and generate a Responsible AI dashboard, do the following:
+1. [Register your model](how-to-manage-models.md) in Azure Machine Learning so that you can access the no-code experience.
+1. On the left pane of Azure Machine Learning studio, select the **Models** tab.
+1. Select the registered model that you want to create Responsible AI insights for, and then select the **Details** tab.
+1. Select **Create Responsible AI dashboard (preview)**.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard.png" alt-text="Screenshot of the wizard details pane with 'Create Responsible AI dashboard (preview)' tab highlighted." lightbox ="./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard.png":::
+
+To learn more about supported model types and limitations in the Responsible AI dashboard, see [supported scenarios and limitations](concept-responsible-ai-dashboard.md#supported-scenarios-and-limitations).
+
+The wizard provides an interface for entering all the necessary parameters to create your Responsible AI dashboard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio UI. The studio presents a guided flow and instructional text to help contextualize the variety of choices about which Responsible AI components you'd like to populate your dashboard with.
+
+The wizard is divided into six sections:
+
+1. Training datasets
+1. Test dataset
+1. Modeling task
+1. Dashboard components
+1. Component parameters
+1. Experiment configuration
+
+## Select your datasets
+
+In the first two sections, you select the train and test datasets that you used when you trained your model to generate model-debugging insights. For components like causal analysis, which doesn't require a model, you use the train dataset to train the causal model to generate the causal insights.
+
+> [!NOTE]
+> Only tabular datasets in MLTable format are supported.
+
+1. **Select a dataset for training**: In the list of registered datasets in the Azure Machine Learning workspace, select the dataset you want to use to generate Responsible AI insights for components, such as model explanations and error analysis.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-train-dataset.png" alt-text="Screenshot of the train dataset tab." lightbox= "./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-train-dataset.png":::
+
+1. **Select a dataset for testing**: In the list of registered datasets, select the dataset you want to use to populate your Responsible AI dashboard visualizations.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-test-dataset.png" alt-text="Screenshot of the test dataset tab." lightbox= "./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-test-dataset.png":::
+
+1. If the train or test dataset you want to use isn't listed, select **Create** to upload it.
+
+## Select your modeling task
+
+After you've picked your datasets, select your modeling task type, as shown in the following image:
++
+## Select your dashboard components
+
+The Responsible AI dashboard offers two profiles for recommended sets of tools that you can generate:
+
+- **Model debugging**: Understand and debug erroneous data cohorts in your machine learning model by using error analysis, counterfactual what-if examples, and model explainability.
+- **Real-life interventions**: Understand and debug erroneous data cohorts in your machine learning model by using causal analysis.
+
+ > [!NOTE]
+ > Multi-class classification doesn't support the real-life interventions analysis profile.
++
+1. Select the profile you want to use.
+1. Select **Next**.
+
+## Configure parameters for dashboard components
+
+After you've selected a profile, the **Component parameters for model debugging** configuration pane for the corresponding components appears.
++
+Component parameters for model debugging:
+
+1. **Target feature (required)**: Specify the feature that your model was trained to predict.
+1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata.
+1. **Generate error tree and heat map**: Toggle on and off to generate an error analysis component for your Responsible AI dashboard.
+1. **Features for error heat map**: Select up to two features that you want to pre-generate an error heatmap for.
+1. **Advanced configuration**: Specify additional parameters, such as **Maximum depth of error tree**, **Number of leaves in error tree**, and **Minimum number of samples in each leaf node**.
+1. **Generate counterfactual what-if examples**: Toggle on and off to generate a counterfactual what-if component for your Responsible AI dashboard.
+1. **Number of counterfactuals (required)**: Specify the number of counterfactual examples that you want generated per data point. A minimum of 10 should be generated to enable a bar chart view of the features that were most perturbed, on average, to achieve the desired prediction.
+1. **Range of value predictions (required)**: For regression scenarios, specify the range that you want counterfactual examples to have prediction values in. For binary classification scenarios, the range is automatically set to generate counterfactuals for the opposite class of each data point. For multiclass classification scenarios, use the dropdown list to specify which class you want each data point to be predicted as.
+1. **Specify which features to perturb**: By default, all features will be perturbed. However, if you want only specific features to be perturbed, select **Specify which features to perturb for generating counterfactual explanations** to display a pane with a list of features to select.
+
+ When you select **Specify which features to perturb**, you can specify the range you want to allow perturbations in. For example: for the feature YOE (Years of experience), specify that counterfactuals should have feature values ranging from only 10 to 21 instead of the default values of 5 to 21.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/model-debug-counterfactuals.png" alt-text="Screenshot of the wizard, showing a pane of features you can specify to perturb." lightbox = "./media/how-to-responsible-ai-insights-ui/model-debug-counterfactuals.png":::
+
+1. **Generate explanations**: Toggle on and off to generate a model explanation component for your Responsible AI dashboard. No configuration is necessary, because a default opaque box mimic explainer will be used to generate feature importances.
+
+Alternatively, if you select the **Real-life interventions** profile, you'll see the following screen for generating a causal analysis. This helps you understand the causal effects of features you want to "treat" on a certain outcome you want to optimize.
++
+To configure the component parameters for real-life interventions (causal analysis), do the following:
+
+1. **Target feature (required)**: Choose the outcome you want the causal effects to be calculated for.
+1. **Treatment features (required)**: Choose one or more features that you're interested in changing ("treating") to optimize the target outcome.
+1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata.
+1. **Advanced settings**: Specify additional parameters for your causal analysis, such as heterogeneous features (that is, additional features to understand causal segmentation in your analysis, in addition to your treatment features) and which causal model you want to be used.
+
+## Configure your experiment
+
+Finally, configure your experiment to kick off a job to generate your Responsible AI dashboard.
++
+On the **Training job** or **Experiment configuration** pane, do the following:
+
+1. **Name**: Give your dashboard a unique name so that you can differentiate it when you're viewing the list of dashboards for a given model.
+1. **Experiment name**: Select an existing experiment to run the job in, or create a new experiment.
+1. **Existing experiment**: In the dropdown list, select an existing experiment.
+1. **Select compute type**: Specify which compute type you want to use to execute your job.
+1. **Select compute**: In the dropdown list, select the compute you want to use. If there are no existing compute resources, select the plus sign (**+**), create a new compute resource, and then refresh the list.
+1. **Description**: Add a longer description of your Responsible AI dashboard.
+1. **Tags**: Add any tags to this Responsible AI dashboard.
+
+After you've finished configuring your experiment, select **Create** to start generating your Responsible AI dashboard. You'll be redirected to the experiment page to track the progress of your job, with a link to the resulting Responsible AI dashboard on the job page when it's completed.
+
+To learn how to view and use your Responsible AI dashboard, see [Use the Responsible AI dashboard in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
+
+## Generate Responsible AI scorecard (preview)
+
+Once you've created a dashboard, you can use a no-code UI in Azure Machine Learning studio to customize and generate a Responsible AI scorecard. This enables you to share key insights for responsible deployment of your model, such as fairness and feature importance, with non-technical and technical stakeholders. Similar to creating a dashboard, you can use the following steps to access the scorecard generation wizard:
+
+- Navigate to the **Models** tab from the left navigation bar in Azure Machine Learning studio.
+- Select the registered model you'd like to create a scorecard for, and then select the **Responsible AI** tab.
+- From the top panel, select **Create Responsible AI insights (preview)** and then **Generate new PDF scorecard**.
+
+The wizard allows you to customize your PDF scorecard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio, with a guided flow and instructional text to help you choose the components you'd like to populate your scorecard with. The wizard is divided into seven steps, with an eighth step (fairness assessment) that appears only for models with categorical features:
+
+1. PDF scorecard summary
+2. Model performance
+3. Tool selection
+4. Data analysis (previously called data explorer)
+5. Causal analysis
+6. Interpretability
+7. Experiment configuration
+8. Fairness assessment (only if categorical features exist)
+
+### Configuring your scorecard
+
+1. First, enter a descriptive title for your scorecard. You can also enter an optional description about the model's functionality, data it was trained and evaluated on, architecture type, and more.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-summary.png" alt-text="Screenshot of the wizard on scorecard summary configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-summary.png":::
+
+2. *The Model performance* section allows you to incorporate into your scorecard industry-standard model evaluation metrics, while enabling you to set desired target values for your selected metrics. Select your desired performance metrics (up to three) and target values using the dropdowns.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-performance.png" alt-text="Screenshot of the wizard on scorecard model performance configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-performance.png":::
+
+3. *The Tool selection* step allows you to choose which subsequent components to include in your scorecard. Check **Include in scorecard** to include all components, or check and uncheck each component individually. Select the info icon ("i" in a circle) next to a component to learn more about it.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-selection.png" alt-text="Screenshot of the wizard on scorecard tool selection configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-selection.png":::
+
+4. *The Data analysis* section (previously called data explorer) enables cohort analysis. Here, you can identify issues of over- and under-representation, explore how data is clustered in the dataset, and see how model predictions affect specific data cohorts. Use the checkboxes in the dropdown to select your features of interest and identify your model's performance on their underlying cohorts.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-explorer.png" alt-text="Screenshot of the wizard on scorecard data analysis configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-explorer.png":::
+
+5. *The Fairness assessment* section can help with assessing which groups of people might be negatively impacted by predictions of a machine learning model. There are two fields in this section.
+
+ - Sensitive features: identify your sensitive attribute(s) of choice (for example, age, gender) by prioritizing up to 20 subgroups you would like to explore and compare.
+
+ - Fairness metric: select a fairness metric that is appropriate for your setting (for example, difference in accuracy, error rate ratio), and identify your desired target value(s) for your selected fairness metric(s). Your selected fairness metric (paired with your selection of difference or ratio via the toggle) captures the difference or ratio between the extreme values across the subgroups (max - min or max/min); a small illustrative calculation follows the note below.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-fairness.png" alt-text="Screenshot of the wizard on scorecard fairness assessment configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-fairness.png":::
+
+ > [!NOTE]
+ > The Fairness assessment is currently only available for categorical sensitive attributes such as gender.
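+
+    For instance, a small illustrative calculation with hypothetical per-group accuracy scores:
+
+    ```python
+    group_accuracy = {"group_a": 0.82, "group_b": 0.91}  # hypothetical subgroup scores
+    hi, lo = max(group_accuracy.values()), min(group_accuracy.values())
+    difference = hi - lo  # fairness_evaluation_kind: difference (max - min)
+    ratio = hi / lo       # fairness_evaluation_kind: ratio (max/min)
+    ```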
+
+6. *The Causal analysis* section answers real-world "what if" questions about how changes of treatments would impact a real-world outcome. If the causal component is activated in the Responsible AI dashboard for which you're generating a scorecard, no more configuration is needed.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-causal.png" alt-text="Screenshot of the wizard on scorecard causal analysis configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-causal.png":::
+
+7. *The Interpretability* section generates human-understandable descriptions for the predictions made by your machine learning model. Using model explanations, you can understand the reasoning behind decisions made by your model. Select a number (K) to see the top K important features that affect your overall model predictions. The default value for K is 10.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-interpretability.png" alt-text="Screenshot of the wizard on scorecard feature importance configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-interpretability.png":::
+
+8. Lastly, configure your experiment to kick off a job to generate your scorecard. These configurations are the same as the ones for your Responsible AI dashboard.
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-experiment.png" alt-text="Screenshot of the wizard on scorecard experiment configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-experiment.png":::
+
+9. Finally, review your configurations and select **Create** to start your job!
+
+ :::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-review.png" alt-text="Screenshot of the wizard on scorecard configuration review." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-review.png":::
+
+ You'll be redirected to the experiment page to track the progress of your job once you've started it. To learn how to view and use your Responsible AI scorecard, see [Use Responsible AI scorecard (preview)](how-to-responsible-ai-scorecard.md).
+
+## Next steps
+
+- After you've generated your Responsible AI dashboard, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md).
+- Learn more about how to use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+- Learn about how the Responsible AI dashboard and scorecard were used by the UK National Health Service (NHS) in a [real life customer story](https://aka.ms/NHSCustomerStory).
+- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning How To Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-scorecard.md
Title: Share insights with a Responsible AI scorecard (preview)
+ Title: Use Responsible AI scorecard (preview) in Azure Machine Learning
description: Share insights with non-technical business stakeholders by exporting a PDF Responsible AI scorecard from Azure Machine Learning.
Previously updated : 08/17/2022 Last updated : 11/09/2022
-# Share insights with a Responsible AI scorecard (preview)
+# Use Responsible AI scorecard (preview) in Azure Machine Learning
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] An Azure Machine Learning Responsible AI scorecard is a PDF report that's generated based on Responsible AI dashboard insights and customizations to accompany your machine learning models. You can easily configure, download, and share your PDF scorecard with your technical and non-technical stakeholders to educate them about your data and model health and compliance, and to help build trust. You can also use the scorecard in audit reviews to inform the stakeholders about the characteristics of your model.
+## Where to find your Responsible AI scorecard
-## Why a Responsible AI scorecard?
-
-The Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions. Though the dashboard can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:
--- There often exists a gap between the technical Responsible AI tools (designed for machine learning professionals) and the ethical, regulatory, and business requirements that define the production environment.-- Although an end-to-end machine learning lifecycle keeps both technical and non-technical stakeholders in the loop, there's very little support to enable an effective multi-stakeholder alignment where technical experts get timely feedback and direction from the non-technical stakeholders.-- AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.-
-One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the ability to archive, for quick future reference, model and data insights in the Azure Machine Learning run history. As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we're introducing the Responsible AI scorecard to empower machine learning professionals to generate and share their data and model health records easily.
-
-## Who should use a Responsible AI scorecard?
-
-* If you're a data scientist or machine learning professional:
-
- After training your model and generating its corresponding Responsible AI dashboards for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard and share the report easily with your technical and non-technical stakeholders. Doing so helps build trust and gain their approval for deployment.
-
-* If you're a product manager, a business leader, or an accountable stakeholder on an AI product:
-
- You can pass your desired model performance and fairness target values, such as target accuracy or target error rate, to your data science team. The team can generate a scorecard with respect to your identified target values, assess whether your model meets them, and then advise as to whether the model should be deployed or further improved.
-
-## Generate a Responsible AI scorecard
-
-The configuration stage requires you to use your domain expertise around the problem to set your desired target values on model performance and fairness metrics.
-
-As with other Responsible AI dashboard components [configured in the YAML pipeline](how-to-responsible-ai-dashboard-sdk-cli.md?tabs=yaml#responsible-ai-components), you can add a component to generate the scorecard in the YAML pipeline.
-
-In the following code, the *pdf_gen.json* file is the JSON configuration file for scorecard generation, and *cohorts.json* is the JSON definition file for pre-built cohorts.
-
-```yml
-scorecard_01:
-
- type: command
- component: azureml:rai_score_card@latest
- inputs:
- dashboard: ${{parent.jobs.gather_01.outputs.dashboard}}
- pdf_generation_config:
- type: uri_file
- path: ./pdf_gen.json
- mode: download
-
- predefined_cohorts_json:
- type: uri_file
- path: ./cohorts.json
- mode: download
-
-```
-
-Here's a sample JSON file for cohorts definition and scorecard-generation configuration:
--
-Cohorts definition:
-```yml
-[
- {
- "name": "High Yoe",
- "cohort_filter_list": [
- {
- "method": "greater",
- "arg": [
- 5
- ],
- "column": "YOE"
- }
- ]
- },
- {
- "name": "Low Yoe",
- "cohort_filter_list": [
- {
- "method": "less",
- "arg": [
- 6.5
- ],
- "column": "YOE"
- }
- ]
- }
-]
-```
-
-Here's a scorecard-generation configuration file as a regression example:
-
-```yml
-{
- "Model": {
- "ModelName": "GPT-2 Access",
- "ModelType": "Regression",
- "ModelSummary": "This is a regression model to analyze how likely a programmer is given access to GPT-2"
- },
- "Metrics": {
- "mean_absolute_error": {
- "threshold": "<=20"
- },
- "mean_squared_error": {}
- },
- "FeatureImportance": {
- "top_n": 6
- },
- "DataExplorer": {
- "features": [
- "YOE",
- "age"
- ]
- },
- "Fairness": {
- "metric": ["mean_squared_error"],
- "sensitive_features": ["YOUR SENSITIVE ATTRIBUTE"],
- "fairness_evaluation_kind": "difference OR ratio"
- },
- "Cohorts": [
- "High Yoe",
- "Low Yoe"
- ]
-}
-```
-
-Here's a scorecard-generation configuration file as a classification example:
-
-```yml
-{
- "Model": {
- "ModelName": "Housing Price Range Prediction",
- "ModelType": "Classification",
- "ModelSummary": "This model is a classifier that predicts whether the house will sell for more than the median price."
- },
- "Metrics" :{
- "accuracy_score": {
- "threshold": ">=0.85"
- },
- }
- "FeatureImportance": {
- "top_n": 6
- },
- "DataExplorer": {
- "features": [
- "YearBuilt",
- "OverallQual",
- "GarageCars"
- ]
- },
- "Fairness": {
- "metric": ["accuracy_score", "selection_rate"],
- "sensitive_features": ["YOUR SENSITIVE ATTRIBUTE"],
- "fairness_evaluation_kind": "difference OR ratio"
- }
-}
-```
--
-### Definition of inputs for the Responsible AI scorecard component
-
-This section lists and defines the parameters that are required to configure the Responsible AI scorecard component.
-
-#### Model
-
-| ModelName | Name of model |
-|||
-| `ModelType` | Values in ['classification', 'regression']. |
-| `ModelSummary` | Enter text that summarizes what the model is for. |
-
-> [!NOTE]
-> For multi-class classification, you should first use the One-vs-Rest strategy to choose your reference class, and then split your multi-class classification model into a binary classification problem for your selected reference class versus the rest of the classes.
-
-#### Metrics
-
-| Performance metric | Definition | Model type |
-||||
-| `accuracy_score` | The fraction of data points that are classified correctly. | Classification |
-| `precision_score` | The fraction of data points that are classified correctly among those classified as 1. | Classification |
-| `recall_score` | The fraction of data points that are classified correctly among those whose true label is 1. Alternative names: true positive rate, sensitivity. | Classification |
-| `f1_score` | The F1 score is the harmonic mean of precision and recall. | Classification |
-| `error_rate` | The proportion of instances that are misclassified over the whole set of instances. | Classification |
-| `mean_absolute_error` | The average of absolute values of errors. More robust to outliers than `mean_squared_error`. | Regression |
-| `mean_squared_error` | The average of squared errors. | Regression |
-| `median_absolute_error` | The median of squared errors. | Regression |
-| `r2_score` | The fraction of variance in the labels explained by the model. | Regression |
-
-Threshold: The desired threshold for the selected metric. Allowed mathematical tokens are >, <, >=, and <=m, followed by a real number. For example, >= 0.75 means that the target for the selected metric is greater than or equal to 0.75.
-
-#### Feature importance
-
-top_n: The number of features to show, with a maximum of 10. Positive integers up to 10 are allowed.
-
-#### Fairness
-
-| Metric | Definition |
-|--|--|
-| `metric` | The primary metric for evaluation fairness. |
-| `sensitive_features` | A list of feature names from the input dataset to be designated as sensitive features for the fairness report. |
-| `fairness_evaluation_kind` | Values in ['difference', 'ratio']. |
-| `threshold` | The *desired target values* of the fairness evaluation. Allowed mathematical tokens are >, <, >=, and <=, followed by a real number.<br>For example, metric="accuracy", fairness_evaluation_kind="difference".<br><= 0.05 means that the target for the difference in accuracy is less than or equal to 0.05. |
-
-> [!NOTE]
-> Your choice of `fairness_evaluation_kind` (selecting 'difference' versus 'ratio') affects the scale of your target value. In your selection, be sure to choose a meaningful target value.
-
-You can select from the following metrics, paired with `fairness_evaluation_kind`, to configure your fairness assessment component of the scorecard:
-
-| Metric | fairness_evaluation_kind | Definition | Model type |
-|||||
-| `accuracy_score` | difference | The maximum difference in accuracy score between any two groups. | Classification |
-| `accuracy_score` | ratio | The minimum ratio in accuracy score between any two groups. | Classification |
-| `precision_score` | difference | The maximum difference in precision score between any two groups. | Classification |
-| `precision_score` | ratio | The maximum ratio in precision score between any two groups. | Classification |
-| `recall_score` | difference | The maximum difference in recall score between any two groups. | Classification |
-| `recall_score` | ratio | The maximum ratio in recall score between any two groups. | Classification |
-| `f1_score` | difference | The maximum difference in f1 score between any two groups. | Classification |
-| `f1_score` | ratio | The maximum ratio in f1 score between any two groups. | Classification |
-| `error_rate` | difference | The maximum difference in error rate between any two groups. | Classification |
-| `error_rate` | ratio | The maximum ratio in error rate between any two groups.|Classification|
-| `Selection_rate` | difference | The maximum difference in selection rate between any two groups. | Classification |
-| `Selection_rate` | ratio | The maximum ratio in selection rate between any two groups. | Classification |
-| `mean_absolute_error` | difference | The maximum difference in mean absolute error between any two groups. | Regression |
-| `mean_absolute_error` | ratio | The maximum ratio in mean absolute error between any two groups. | Regression |
-| `mean_squared_error` | difference | The maximum difference in mean squared error between any two groups. | Regression |
-| `mean_squared_error` | ratio | The maximum ratio in mean squared error between any two groups. | Regression |
-| `median_absolute_error` | difference | The maximum difference in median absolute error between any two groups. | Regression |
-| `median_absolute_error` | ratio | The maximum ratio in median absolute error between any two groups. | Regression |
-| `r2_score` | difference | The maximum difference in R<sup>2</sup> score between any two groups. | Regression |
-| `r2_Score` | ratio | The maximum ratio in R<sup>2</sup> score between any two groups. | Regression |
-
-## View your Responsible AI scorecard
-
-The Responsible AI scorecard is linked to a Responsible AI dashboard. To view your Responsible AI scorecard, go into your model registry and select the registered model that you've generated a Responsible AI dashboard for. After you've selected your model, select the **Responsible AI (preview)** tab to view a list of generated dashboards. Select which dashboard you want to export a Responsible AI scorecard PDF for by selecting **Responsible AI scorecard (preview)**.
+Responsible AI scorecards are linked to your Responsible AI dashboards. To view your Responsible AI scorecard, go into your model registry by selecting **Models** in Azure Machine Learning studio. Then select the registered model that you've generated a Responsible AI dashboard and scorecard for. After you've selected your model, select the **Responsible AI** tab to view a list of generated dashboards. Select the dashboard you want to export a Responsible AI scorecard PDF for by selecting **Responsible AI Insights** and then **View all PDF scorecards**.
:::image type="content" source="./media/how-to-responsible-ai-scorecard/scorecard-studio.png" alt-text="Screenshot of the 'Responsible AI (preview)' pane in Azure Machine Learning studio, with the 'Responsible AI scorecard (preview)' tab highlighted." lightbox = "./media/how-to-responsible-ai-scorecard/scorecard-studio.png":::
The Responsible AI scorecard is linked to a Responsible AI dashboard. To view yo
:::image type="content" source="./media/how-to-responsible-ai-scorecard/studio-select-scorecard.png" alt-text="Screenshot of the 'Responsible AI scorecards' pane for selecting a scorecard to download." lightbox= "./media/how-to-responsible-ai-scorecard/studio-select-scorecard.png":::
-## Read your Responsible AI scorecard
+## How to read your Responsible AI scorecard
The Responsible AI scorecard is a PDF summary of key insights from your Responsible AI dashboard. The first summary segment of the scorecard gives you an overview of the machine learning model and the key target values you've set to help your stakeholders determine whether the model is ready to be deployed: :::image type="content" source="./media/how-to-responsible-ai-scorecard/scorecard-summary.png" alt-text="Screenshot of the model summary on the Responsible AI scorecard PDF.":::
-The data explorer segment shows you characteristics of your data, because any model story is incomplete without a correct understanding of your data:
+The data analysis segment shows you characteristics of your data, because any model story is incomplete without a correct understanding of your data:
The model performance segment displays your model's most important metrics and characteristics of your predictions and how well they satisfy your desired target values:
Finally, you can see your dataset's causal insights summarized, which can help y
## Next steps -- See the how-to guide for generating a Responsible AI dashboard via [CLI&nbsp;v2 and SDK&nbsp;v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
+- See the how-to guide for generating a Responsible AI dashboard via [CLI&nbsp;v2 and SDK&nbsp;v2](how-to-responsible-ai-insights-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-insights-ui.md).
- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md). - View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python. - Learn more about how you can use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
This table shows the VM SKUs that are supported for Azure Machine Learning manag
| X-Large| - | Standard_F32s_v2 <br/> Standard_F48s_v2 <br/> Standard_F64s_v2 <br/> Standard_F72s_v2 <br/> Standard_FX24mds <br/> Standard_FX36mds <br/> Standard_FX48mds| Standard_E32s_v3 <br/> Standard_E48s_v3 <br/> Standard_E64s_v3 | Standard_ND40rs_v2 <br/> Standard_ND96asr_v4 <br/> Standard_ND96amsr_A100_v4 <br/>| > [!CAUTION]
-> `Standard_DS1_v2` and `Standard_DS2_v2` may be too small to compute resources used with managed online endpoints. If you want to reduce the cost of deploying multiple models, see [the example for multi models](how-to-deploy-managed-online-endpoints.md#use-more-than-one-model).
+> `Standard_DS1_v2` and `Standard_F2s_v2` may be too small for compute resources used with managed online endpoints. If you want to reduce the cost of deploying multiple models, see [the example for multi models](how-to-deploy-managed-online-endpoints.md#use-more-than-one-model).
migrate Concepts Vmware Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-vmware-agentless-migration.md
ms. Previously updated : 05/31/2021 Last updated : 09/01/2022 # Azure Migrate agentless migration of VMware virtual machines
There are two stages in every replication cycle that ensures data integrity betw
1. First, we validate if every sector that has changed in the source disk is replicated to the target disk. Validation is performed using bitmaps. Source disk is divided into sectors of 512 bytes. Every sector in the source disk is mapped to a bit in the bitmap. When data replication starts, bitmap is created for all the changed blocks (in delta cycle) in the source disk that needs to be replicated. Similarly, when the data is transferred to the target Azure disk, a bitmap is created. Once the data transfer completes successfully, the cloud service compares the two bitmaps to ensure no changed block is missed. In case there's any mismatch between the bitmaps, the cycle is considered failed. As every cycle is resynchronization, the mismatch will be fixed in the next cycle.
-1. Next we ensure that the data that's transferred to the Azure disks is the same as the data that was replicated from the source disks. Every changed block that is uploaded is compressed and encrypted before it's written as a blob in the log storage account. We compute the checksum of this block before compression. This checksum is stored as metadata along with the compressed data. Upon decompression, the checksum for the data is calculated and compared with the checksum computed in the source environment. If there's a mismatch, the data is not written to the Azure disks, and the cycle is considered failed. As every cycle is resynchronization, the mismatch will be fixed in the next cycle.
+1. Next we ensure that the data that's transferred to the Azure disks is the same as the data that was replicated from the source disks. Every changed block that is uploaded is compressed and encrypted before it's written as a blob in the log storage account. We compute the checksum of this block before compression. This checksum is stored as metadata along with the compressed data. Upon decompression, the checksum for the data is calculated and compared with the checksum computed in the source environment. If there's a mismatch, the data isn't written to the Azure disks, and the cycle is considered failed. As every cycle is resynchronization, the mismatch will be fixed in the next cycle.
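+
+The following is a minimal, illustrative sketch of these two integrity checks. It isn't Azure Migrate's actual implementation; the hash and compression choices here are assumptions:
+
+```python
+import hashlib
+import zlib
+
+SECTOR_SIZE = 512
+
+def build_bitmap(changed_sectors, total_sectors):
+    """One bit per 512-byte sector; set bits mark sectors that changed."""
+    bitmap = bytearray((total_sectors + 7) // 8)
+    for s in changed_sectors:
+        bitmap[s // 8] |= 1 << (s % 8)
+    return bytes(bitmap)
+
+def pack_block(block):
+    """Checksum is computed before compression and kept as metadata."""
+    return zlib.compress(block), hashlib.sha256(block).hexdigest()
+
+def unpack_block(compressed, expected_checksum):
+    data = zlib.decompress(compressed)
+    if hashlib.sha256(data).hexdigest() != expected_checksum:
+        raise ValueError("checksum mismatch: block not written; cycle fails")
+    return data
+
+# A cycle succeeds only if both checks pass:
+assert build_bitmap({0, 7}, 1024) == build_bitmap({0, 7}, 1024)
+payload, checksum = pack_block(b"\x00" * SECTOR_SIZE)
+assert unpack_block(payload, checksum) == b"\x00" * SECTOR_SIZE
+```
+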
## Security
The Azure Migrate appliance compresses data and encrypts before uploading. Data
When a VM undergoes replication (data copy), there are a few possible states: - **Initial replication queued**: The VM is queued for replication (or migration) as there may be other VMs that are consuming the on-premises resources (during replication or migration). Once the resources are free, this VM will be processed. - **Initial replication in progress**: The VM is being scheduled for initial replication. -- **Initial replication**: The VM is undergoing initial replication. When the VM is undergoing initial replication, you cannot proceed with test migration and migration. You can only stop replication at this stage.
+- **Initial replication**: The VM is undergoing initial replication. When the VM is undergoing initial replication, you can't proceed with test migration and migration. You can only stop replication at this stage.
- **Initial replication (x%)**: The initial replication is active and has progressed by x%. - **Delta sync**: The VM may be undergoing a delta replication cycle that replicates the remaining data churn since the last replication cycle. - **Pause in progress**: The VM is undergoing an active delta replication cycle and will be paused in some time.
When a VM undergoes replication (data copy), there are a few possible states:
### Other states -- **Initial replication failed**: The initial data could not be copied for the VM. Follow the remediation guidance to resolve. -- **Repair pending**: There was an issue in the replication cycle. You can select the link to understand possible causes and actions to remediate (as applicable). If you had opted for **Automatically repair replication** by selecting **Yes** when you triggered replication of VM, the tool will try to repair it for you. Else, select the VM, and select **Repair Replication**. If you did not opt for **Automatically repair replication** or if the above step did not work for you, then stop replication for the virtual machine, reset the changed block tracking on the virtual machine, and then reconfigure the replication.
+- **Initial replication failed**: The initial data couldn't be copied for the VM. Follow the remediation guidance to resolve.
+- **Repair pending**: There was an issue in the replication cycle. You can select the link to understand possible causes and actions to remediate (as applicable). If you opted for **Automatically repair replication** by selecting **Yes** when you triggered replication of the VM, the tool tries to repair it for you. Otherwise, select the VM, and then select **Repair Replication**. If you didn't opt for **Automatically repair replication**, or if the above step didn't work for you, stop replication for the virtual machine, reset changed block tracking on the virtual machine, and then reconfigure replication.
- **Repair replication queued**: The VM is queued for replication repair as there are other VMs that are consuming the on-premises resources. Once the resources are free, the VM will be processed for repair replication.
- **Resync (x%)**: The VM is undergoing a data resynchronization. This can happen if there was an issue or mismatch during data replication.
- **Stop replication/complete migration failed**: Select the link to understand the possible causes for failure and actions to remediate (as applicable).
## Scheduling logic
-Initial replication is scheduled when replication is configured for a VM. It is followed by incremental replications (delta replications).
+Initial replication is scheduled when replication is configured for a VM. It's followed by incremental replications (delta replications).
Delta replication cycles are scheduled as follows:
That is, next delta replication will be scheduled no sooner than one hour. For e
- Ongoing VM replications are prioritized over scheduled replications (new replications).
- The pre-failover (on-demand delta replication) cycle has the highest priority, followed by the initial replication cycle. The delta replication cycle has the least priority.
-That is, whenever a migrate operation is triggered, the on-demand replication cycle for the VM is scheduled and other ongoing replications take back seat if they are competing for resources.
+That is, whenever a migrate operation is triggered, the on-demand replication cycle for the VM is scheduled, and other ongoing replications take a back seat if they're competing for resources.
**Constraints:**
We use the following constraints to ensure that we don't exceed the IOPS limits
## Scale-out replication
-Azure Migrate supports concurrent replication of 500 virtual machines. When you are planning to replicate more than 300 virtual machines, you must deploy a scale-out appliance. The scale-out appliance is similar to an Azure Migrate primary appliance but consists only of gateway agent to facilitate data transfer to Azure. The following diagram shows the recommended way to use the scale-out appliance.
+Azure Migrate supports concurrent replication of 500 virtual machines. When you're planning to replicate more than 300 virtual machines, you must deploy a scale-out appliance. The scale-out appliance is similar to an Azure Migrate primary appliance but consists only of the gateway agent to facilitate data transfer to Azure. The following diagram shows the recommended way to use the scale-out appliance.
![Scale-out configuration.](./media/concepts-vmware-agentless-migration/scale-out-configuration.png)
When you stop replication, the intermediate managed disks (seed disks) created d
A VM for which replication is stopped can be replicated again by re-enabling replication. If the VM was migrated, you can resume replication and migrate it again.
-As a best practice, you should always complete the migration after the VM has migrated successfully to Azure to ensure that you don't incur extra charges for storage transactions on the intermediate managed disks (seed disks). In some cases, you will notice that stop replication takes time. It is because whenever you stop replication, the ongoing replication cycle is completed (only when the VM is in delta sync) before deleting the artifacts.
+As a best practice, you should always complete the migration after the VM has migrated successfully to Azure, to ensure that you don't incur extra charges for storage transactions on the intermediate managed disks (seed disks). In some cases, you'll notice that stopping replication takes time. This is because whenever you stop replication, the ongoing replication cycle completes (only when the VM is in delta sync) before the artifacts are deleted.
## Impact of churn
You can increase or decrease the replication bandwidth using the _NetQosPolicy._
You can throttle replication traffic from the Azure Migrate appliance by creating a policy such as this one:
-`New-NetQosPolicy -Name "ThrottleReplication" -AppPathNameMatchCondition "GatewayWindowsService.exe" -ThrottleRateActionBitsPerSecond 1MB`
+```powershell
+New-NetQosPolicy -Name "ThrottleReplication" -AppPathNameMatchCondition "GatewayWindowsService.exe" -ThrottleRateActionBitsPerSecond 1MB
+```
> [!NOTE]
> This policy applies simultaneously to all VMs replicating from the Azure Migrate appliance.
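If you need to inspect, adjust, or remove the throttle later, the same NetQos cmdlets apply. A short sketch, reusing the policy name from the example above:

```powershell
# Inspect the throttle currently applied on the appliance
Get-NetQosPolicy -Name "ThrottleReplication"

# Raise the cap, for example from a scheduled task that runs after business hours
Set-NetQosPolicy -Name "ThrottleReplication" -ThrottleRateActionBitsPerSecond 10MB

# Remove the throttle entirely
Remove-NetQosPolicy -Name "ThrottleReplication" -Confirm:$false
```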
You can also increase and decrease replication bandwidth based on a schedule usi
Azure Migrate provides a configuration-based mechanism through which customers can specify the time interval during which they don't want any replications to proceed. This time interval is called the blackout window. The need for a blackout window can arise in multiple scenarios, such as when the source environment is resource constrained or when customers want replication to run only during non-business hours.
> [!NOTE]
-> The existing replication cycles at the start of the blackout window will complete before the replication pauses.
+> - The existing replication cycles at the start of the blackout window will complete before the replication pauses.
+> - For any migration initiated during the blackout window, the final replication will not run, causing the migration to fail.
A blackout window can be specified for the appliance by creating or updating the file GatewayDataWorker.json in C:\ProgramData\Microsoft Azure\Config. A typical file would be of the form:
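The file's actual contents aren't reproduced in this digest. Purely as a hypothetical illustration (the property names below are assumptions, not the documented schema), a file that blocks replication during business hours might look like:

```json
{
  "BlackoutWindows": [
    { "Start": "08:00", "End": "18:00" }
  ]
}
```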
purview Troubleshoot Policy Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-policy-distribution.md
+
+ Title: Troubleshoot distribution of Microsoft Purview access policies
+description: Learn how to troubleshoot the enforcement of access policies that were created in Microsoft Purview.
+Last updated: 11/09/2022
+# Tutorial: Troubleshoot distribution of Microsoft Purview access policies (preview)
+
+In this tutorial, learn how to programmatically fetch the access policies that were created in Microsoft Purview. With this information, you can troubleshoot the communication of policies between Microsoft Purview, where policies are created and updated, and the data sources on which these policies are enforced.
+
+To get the necessary context about Microsoft Purview policies, see the concept guides listed in [Next steps](#next-steps).
+
+This guide uses Azure SQL Server as the example data source.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, [create a free one](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+* You must have an existing Microsoft Purview account. If you don't have one, see the [quickstart for creating a Microsoft Purview account](create-catalog-portal.md).
+* Register a data source, enable *Data use management*, and create a policy. To do so, follow one of the Microsoft Purview policies guides. To follow along with the examples in this tutorial, you can [create a DevOps policy for Azure SQL Database](how-to-policies-devops-azure-sql-db.md).
+* To establish a bearer token and to call any data plane APIs, see [the documentation about how to call REST APIs for Microsoft Purview data planes](tutorial-using-rest-apis.md). To be authorized to fetch policies, you need to be a Policy Author, Data Source Admin, or Data Curator at the root-collection level in Microsoft Purview. You can assign those roles by following this guide: [managing Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
+
+## Overview
+There are two ways to fetch access policies from Microsoft Purview:
+- Full pull: Provides a complete set of policies for a particular data resource scope.
+- Delta pull: Provides an incremental view of policies, that is, what changed since the last pull, regardless of whether the last pull was a full or a delta one. A full pull is required before issuing the first delta pull.
+
+The Microsoft Purview policy model is described using [JSON syntax](https://datatracker.ietf.org/doc/html/rfc8259).
+
+The policy distribution endpoint can be constructed from the Microsoft Purview account name as:
+`{endpoint} = https://<account-name>.purview.azure.com/pds`
+
+## Full pull
+
+### Request
+To fetch policies for a data source via full pull, send a `GET` request to `/policyElements` as follows:
+
+```
+GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyelements?api-version={apiVersion}
+```
+
+where the path `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}` matches the resource ID for the data source.
+
+>[!Tip]
+> The resource ID can be found under the properties for the data source in Azure portal.
+
+### Response status codes
+
+|Http Code|Http Code Description|Type|Description|Response|
+|---|---|---|---|---|
+|200|Success|Success|Request processed successfully|Policy data|
+|401|Unauthenticated|Error|No bearer token passed in request or invalid token|Error data|
+|403|Forbidden|Error|Other authentication errors|Error data|
+|404|Not found|Error|The request path is invalid or not registered|Error data|
+|500|Internal server error|Error|Backend service unavailable|Error data|
+|503|Backend service unavailable|Error|Backend service unavailable|Error data|
+
+### Example for Azure SQL Server (Azure SQL Database)
+
+##### Example parameters:
+- Microsoft Purview account: relecloud-pv
+- Data source Resource ID: /subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1
+
+##### Example request:
+```
+GET https://relecloud-pv.purview.azure.com/pds/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview
+```
+
+##### Example response:
+
+`200 OK`
+
+```json
+{
+ "count": 7,
+ "syncToken": "820:0",
+ "elements": [
+ {
+ "id": "9912572d-58bc-4835-a313-b913ac5bef97",
+ "kind": "policy",
+ "updatedAt": "2022-11-04T20:57:20.9389522Z",
+ "version": 1,
+ "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"Finance-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
+ },
+ {
+ "id": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4",
+ "scopes": [
+ "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"
+ ],
+ "kind": "policyset",
+ "updatedAt": "2022-11-04T20:57:20.9389456Z",
+ "version": 1,
+ "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
+ }
+ ]
+}
+```
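+
+A minimal PowerShell sketch of the same full pull call, assuming you already acquired a bearer token as described in the prerequisites (`$token` is a placeholder):
+
+```powershell
+$token = "<bearer-token>"
+$uri = "https://relecloud-pv.purview.azure.com/pds/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview"
+$response = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $token" }
+
+# Keep the syncToken from the response: the first delta pull requires it
+$response.syncToken
+```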
+
+## Delta pull
+
+### Request
+To fetch policies via delta pull, send a `GET` request to `/policyEvents` as follows:
+
+```
+GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyEvents?api-version={apiVersion}&syncToken={syncToken}
+```
+
+Provide the syncToken you got from the prior pull in any successive delta pulls.
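+
+A sketch of a polling call that carries the syncToken forward (error handling for the 304 response varies by PowerShell version; `$token` is a placeholder):
+
+```powershell
+$token = "<bearer-token>"
+$base = "https://relecloud-pv.purview.azure.com/pds"
+$resourceId = "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1"
+$syncToken = "820:0"  # returned by the prior full (or delta) pull
+
+try {
+    $uri = "$base$resourceId/policyEvents?api-version=2021-01-01-preview&syncToken=$syncToken"
+    $delta = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $token" }
+    $syncToken = $delta.syncToken  # carry forward for the next delta pull
+    $delta.elements | ForEach-Object { "$($_.eventType) $($_.id)" }
+}
+catch {
+    # A 304 response means no policy events since the last pull
+    if ($_.Exception.Response.StatusCode -eq 304) { "No changes since last pull" } else { throw }
+}
+```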
+
+### Response status codes
+
+|Http Code|Http Code Description|Type|Description|Response|
+|---|---|---|---|---|
+|200|Success|Success|Request processed successfully|Policy data|
+|304|Not modified|Success|No events received since last delta pull call|None|
+|401|Unauthenticated|Error|No bearer token passed in request or invalid token|Error data|
+|403|Forbidden|Error|Other authentication errors|Error data|
+|404|Not found|Error|The request path is invalid or not registered|Error data|
+|500|Internal server error|Error|Backend service unavailable|Error data|
+|503|Backend service unavailable|Error|Backend service unavailable|Error data|
+
+### Example for Azure SQL Server (Azure SQL Database)
+
+##### Example parameters:
+- Microsoft Purview account: relecloud-pv
+- Data source Resource ID: /subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1
+- syncToken: 820:0
+
+##### Example request:
+```
+GET https://relecloud-pv.purview.azure.com/pds/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyEvents?api-version=2021-01-01-preview&syncToken=820:0
+```
+
+##### Example response:
+
+`200 OK`
+
+```json
+{
+ "count": 2,
+ "syncToken": "822:0",
+ "elements": [
+ {
+ "eventType": "Microsoft.Purview/PolicyElements/Delete",
+ "id": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4",
+ "scopes": [
+ "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"
+ ],
+ "kind": "policyset",
+ "updatedAt": "2022-11-04T20:57:20.9389456Z",
+ "version": 1,
+ "elementJson": "{\"id\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"name\":\"f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4\",\"kind\":\"policyset\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389456Z\",\"preconditionRules\":[{\"dnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}]]}],\"policyRefs\":[\"9912572d-58bc-4835-a313-b913ac5bef97\"]}"
+ },
+ {
+ "eventType": "Microsoft.Purview/PolicyElements/Delete",
+ "id": "9912572d-58bc-4835-a313-b913ac5bef97",
+ "scopes": [
+ "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"
+ ],
+ "kind": "policy",
+ "updatedAt": "2022-11-04T20:57:20.9389522Z",
+ "version": 1,
+ "elementJson": "{\"id\":\"9912572d-58bc-4835-a313-b913ac5bef97\",\"name\":\"Finance-rg_sqlsecurityauditor\",\"kind\":\"policy\",\"version\":1,\"updatedAt\":\"2022-11-04T20:57:20.9389522Z\",\"decisionRules\":[{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"fromRule\":\"purviewdatarole_builtin_sqlsecurityauditor\",\"attributeName\":\"derived.purview.role\",\"attributeValueIncludes\":\"purviewdatarole_builtin_sqlsecurityauditor\"}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_0235e4df-0d3f-41ca-98ed-edf1b8bfcf9f\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]},{\"kind\":\"decisionrule\",\"effect\":\"Permit\",\"id\":\"auto_45fa5236-a2a3-4291-9f0a-813b2883f118\",\"updatedAt\":\"11/04/2022 20:57:20\",\"cnfCondition\":[[{\"attributeName\":\"resource.azure.path\",\"attributeValueIncludedIn\":[\"/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**\"]}],[{\"attributeName\":\"request.azure.dataAction\",\"attributeValueIncludedIn\":[\"Microsoft.Sql/sqlservers/databases/Connect\"]}],[{\"attributeName\":\"principal.microsoft.groups\",\"attributeValueIncludedIn\":[\"b29c1676-8d2c-4a81-b7e1-365b79088375\"]}]]}]}"
+ }
+ ]
+}
+```
+
+In this example, the delta pull communicates that the policy on the resource group Finance-rg, which had the scope `"scopes": ["/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg"]`, was deleted, per the `"eventType": "Microsoft.Purview/PolicyElements/Delete"`.
+
+## Policy constructs
+There are three top-level policy constructs used within the full pull (`/policyElements`) and delta pull (`/policyEvents`) responses: PolicySet, Policy, and AttributeRule.
+
+### PolicySet
+
+A PolicySet associates a Policy with a resource scope. Purview policy decision computation starts with a list of PolicySets. Evaluating a PolicySet triggers evaluation of the Policies referenced in the PolicySet.
+
+|member|value|type|cardinality|description|
+|---|---|---|---|---|
+|ID| |string|1||
+|name| |string|1||
+|kind| |string|1||
+|version|1|number|1||
+|updatedAt| |string|1|String representation of time in yyyy-MM-ddTHH:mm:ss.fffffffZ Ex: "2022-01-11T09:55:52.6472858Z"|
+|preconditionRules| |array[Object:Rule]|0..1||
+|policyRefs| |array[string]|1|List of policy IDs|
+
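+For reference, this is the policyset element from the full pull example above, with its elementJson pretty-printed:
+
+```json
+{
+  "id": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4",
+  "name": "f1f2ecc0-c8fa-473f-9adf-7f7bd53ffdb4",
+  "kind": "policyset",
+  "version": 1,
+  "updatedAt": "2022-11-04T20:57:20.9389456Z",
+  "preconditionRules": [
+    {
+      "dnfCondition": [
+        [
+          {
+            "attributeName": "resource.azure.path",
+            "attributeValueIncludedIn": [
+              "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**"
+            ]
+          }
+        ]
+      ]
+    }
+  ],
+  "policyRefs": [
+    "9912572d-58bc-4835-a313-b913ac5bef97"
+  ]
+}
+```
+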
+### Policy
+
+A Policy specifies the decision that should be emitted if the policy is applicable to the request, provided that the request context attributes satisfy the attribute predicates specified in the policy. Evaluating a Policy triggers evaluation of the AttributeRules referenced in the Policy.
+
+|member|value|type|cardinality|description|
+|---|---|---|---|---|
+|ID| |string|1||
+|name| |string|1||
+|kind| |string|1||
+|version|1|number|1||
+|updatedAt| |string|1|String representation of time in yyyy-MM-ddTHH:mm:ss.fffffffZ Ex: "2022-01-11T09:55:52.6472858Z"|
+|preconditionRules| |array[Object:Rule]|0..1|All the rules are 'anded'|
+|decisionRules| |array[Object:DecisionRule]|1||
+
+### AttributeRule
+
+An AttributeRule produces derived attributes and adds them to the request context attributes. Evaluating an AttributeRule triggers evaluation of additional AttributeRules referenced in the AttributeRule.
+
+|member|value|type|cardinality|description|
+|---|---|---|---|---|
+|ID| |string|1||
+|name| |string|1||
+|kind|AttributeRule|string|1||
+|version|1|number|1||
+|dnfCondition| |array[array[Object:AttributePredicate]]|0..1||
+|cnfCondition| |array[array[Object:AttributePredicate]]|0..1||
+|condition| |Object: Condition|0..1||
+|derivedAttributes| |array[Object:DerivedAttribute]|1||
+
+## Common sub-constructs used in PolicySet, Policy, AttributeRule
+
+#### AttributePredicate
+An AttributePredicate checks whether the predicate specified on an attribute is satisfied. An AttributePredicate can specify the following properties:
+- attributeName: specifies the attribute name on which the attribute predicate needs to be evaluated.
+- matcherId: the ID of the matcher function used to compare the attribute value, looked up in the request context by the attribute name, to the attribute value literal specified in the predicate. Two matchers are currently supported: ExactMatcher and GlobMatcher. If matcherId isn't specified, it defaults to GlobMatcher.
+- fromRule: optional property specifying the ID of an AttributeRule that needs to be evaluated to populate the request context with the attribute values compared in this predicate.
+- attributeValueIncludes: a scalar literal value that should match the request context attribute values.
+- attributeValueIncludedIn: an array of literal values that should match the request context attribute values.
+- attributeValueExcluded: a scalar literal value that should not match the request context attribute values.
+- attributeValueExcludedIn: an array of literal values that should not match the request context attribute values.
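+
+For example, the policy in the full pull example above contains predicates like the following (pretty-printed). The `/**` suffix is a glob, which works because matcherId isn't specified and therefore defaults to GlobMatcher:
+
+```json
+{
+  "attributeName": "resource.azure.path",
+  "attributeValueIncludedIn": [
+    "/subscriptions/b285630c-8185-456b-80ae-97296561303e/resourceGroups/Finance-rg/**"
+  ]
+}
+```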
+
+#### CNFCondition
+An array of arrays of AttributePredicates that must be satisfied, with AND-of-ORs semantics (conjunctive normal form).
+
+#### DNFCondition
+An array of arrays of AttributePredicates that must be satisfied, with OR-of-ANDs semantics (disjunctive normal form).
+
+#### PreConditionRule
+- A PreConditionRule can specify at most one each of CNFCondition, DNFCondition, and Condition.
+- All of the specified CNFCondition, DNFCondition, and Condition must evaluate to "true" for the PreConditionRule to be satisfied for the current request.
+- If any of the precondition rules isn't satisfied, the containing PolicySet or Policy is considered not applicable for the current request and is skipped.
+
+#### Condition
+- A Condition allows specifying a complex condition of predicates that can nest functions from a library of functions.
+- At decision compute time, the Condition evaluates to "true" or "false" and can also emit optional Obligations.
+- If the Condition evaluates to "false", the containing DecisionRule is considered not applicable to the current request.
+
+## Next steps
+
+Concept guides for Microsoft Purview access policies:
+- [DevOps policies](concept-policies-devops.md)
+- [Self-service access policies](concept-self-service-data-access-policy.md)
+- [Data owner policies](concept-policies-data-owner.md)
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
+
+ Title: Azure services
+description: Learn about region types and service categories in Azure.
+Last updated: 08/18/2022
+# Available services by region types and categories
+
+Availability of services across Azure regions depends on a region's type. There are two types of regions in Azure: *recommended* and *alternate*.
+
+- **Recommended**: These regions provide the broadest range of service capabilities and currently support availability zones. Designated in the Azure portal as **Recommended**.
+- **Alternate**: These regions extend Azure's footprint within a data residency boundary where a recommended region currently exists. Alternate regions help to optimize latency and provide a second region for disaster recovery needs but don't support availability zones. Azure conducts regular assessments of alternate regions to determine if they should become recommended regions. Designated in the Azure portal as **Other**.
+
+## Service categories across region types
+
+Azure services are grouped into three categories: *foundational*, *mainstream*, and *strategic*. Azure's general policy on deploying services into any given region is primarily driven by region type, service categories, and customer demand.
+
+- **Foundational**: Available in all recommended and alternate regions when the region is generally available, or within 90 days of a new foundational service becoming generally available.
+- **Mainstream**: Available in all recommended regions within 90 days of the region general availability. Demand-driven in alternate regions, and many are already deployed into a large subset of alternate regions.
+- **Strategic** (previously Specialized): Targeted service offerings, often industry-focused or backed by customized hardware. Demand-driven availability across regions, and many are already deployed into a large subset of recommended regions.
+
+To see which services are deployed in a region and the future roadmap for preview or general availability of services in a region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+
+If a service offering isn't available in a region, contact your Microsoft sales representative for more information and to explore options.
+
+| Region type | Non-regional | Foundational | Mainstream | Strategic | Availability zones | Data residency |
+| | | | | | | |
+| Recommended | **Y** | **Y** | **Y** | Demand-driven | **Y** | **Y** |
+| Alternate | **Y** | **Y** | Demand-driven | Demand-driven | N/A | **Y** |
+
+## Available services by region category
+
+Azure assigns service categories as foundational, mainstream, and strategic at general availability. Typically, services start as a strategic service and are upgraded to mainstream and foundational as demand and use grow.
+
+Azure services are presented in the following tables by category. Some services are non-regional, which means they're available globally regardless of region. For a complete list of non-regional services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+
+> [!div class="mx-tableFixed"]
+> | ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational | ![An icon that signifies this service is mainstream.](media/icon-mainstream.svg) Mainstream |
+> |-||
+> | Azure Application Gateway | Azure API Management |
+> | Azure Backup | Azure App Configuration |
+> | Azure Cosmos DB | Azure App Service |
+> | Azure Event Hubs | Azure Active Directory Domain Services |
+> | Azure ExpressRoute | Azure Bastion |
+> | Azure Key Vault | Azure Batch |
+> | Azure Load Balancer | Azure Cache for Redis |
+> | Azure Public IP | Azure Cognitive Search |
+> | Azure Service Bus | Azure Container Registry |
+> | Azure Service Fabric | Azure Container Instances |
+> | Azure Site Recovery | Azure Data Explorer |
+> | Azure SQL | Azure Data Factory |
+> | Azure Storage: Disk Storage | Azure Database for MySQL |
+> | Azure Storage Accounts | Azure Database for PostgreSQL |
+> | Azure Storage: Blob Storage | Azure DDoS Protection |
+> | Azure Storage Data Lake Storage | Azure Event Grid |
+> | Azure Virtual Machines | Azure Firewall |
+> | Azure Virtual Machine Scale Sets | Azure Firewall Manager |
+> | Virtual Machines: Av2-series | Azure Functions |
+> | Virtual Machines: Bs-series | Azure HDInsight |
+> | Virtual Machines: Dv2 and DSv2-series | Azure IoT Hub |
+> | Virtual Machines: Dv3 and DSv3-series | Azure Kubernetes Service (AKS) |
> | Virtual Machines: Ev3 and ESv3-series | Azure Logic Apps |
+> | Azure Virtual Network | Azure Media Services |
+> | Azure VPN Gateway | Azure Monitor: Application Insights |
+> | | Azure Monitor: Log Analytics |
+> | | Azure Network Watcher |
+> | | Azure Private Link |
+> | | Azure Storage: Files Storage |
+> | | Azure Virtual WAN |
+> | | Premium Blob Storage |
+> | | Virtual Machines: Ddsv4-series |
+> | | Virtual Machines: Ddv4-series |
+> | | Virtual Machines: Dsv4-series |
+> | | Virtual Machines: Dv4-series |
+> | | Virtual Machines: Edsv4-series |
+> | | Virtual Machines: Edv4-series |
+> | | Virtual Machines: Esv4-series |
+> | | Virtual Machines: Ev4-series |
+> | | Virtual Machines: Fsv2-series |
+> | | Virtual Machines: M-series |
+
+### Strategic services
+As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and strategic. Service categories are assigned at general availability. Often, services start their lifecycle as a strategic service, and as demand and utilization increase, they may be promoted to mainstream or foundational. The following table lists strategic services.
+
+> [!div class="mx-tableFixed"]
+> | ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic |
+> ||
+> | Azure API for FHIR |
+> | Azure Analysis Services |
+> | Azure Applied AI Services |
+> | Azure Automation |
+> | Azure Cognitive Services |
+> | Azure Data Share |
+> | Azure Databricks |
+> | Azure Database for MariaDB |
+> | Azure Database Migration Service |
+> | Azure Dedicated HSM |
+> | Azure Digital Twins |
+> | Azure HPC Cache |
+> | Azure Lab Services |
+> | Azure Machine Learning |
+> | Azure Managed Instance for Apache Cassandra |
+> | Azure NetApp Files |
+> | Microsoft Purview |
+> | Azure Red Hat OpenShift |
+> | Azure Remote Rendering |
+> | Azure SignalR Service |
+> | Azure Spatial Anchors |
+> | Azure Spring Cloud |
+> | Azure Storage: Archive Storage |
+> | Azure Synapse Analytics |
+> | Azure Ultra Disk Storage |
+> | Azure VMware Solution |
+> | Microsoft Azure Attestation |
+> | SQL Server Stretch Database |
+> | Virtual Machines: DAv4 and DASv4-series |
+> | Virtual Machines: Dasv5 and Dadsv5-series |
+> | Virtual Machines: DCsv2-series |
+> | Virtual Machines: Ddv5 and Ddsv5-series |
+> | Virtual Machines: Dv5 and Dsv5-series |
+> | Virtual Machines: Eav4 and Easv4-series |
+> | Virtual Machines: Easv5 and Eadsv5-series |
+> | Virtual Machines: Edv5 and Edsv5-series |
+> | Virtual Machines: Ev5 and Esv5-series |
+> | Virtual Machines: FX-series |
+> | Virtual Machines: HBv2-series |
+> | Virtual Machines: HBv3-series |
+> | Virtual Machines: HCv1-series |
+> | Virtual Machines: LSv2-series |
+> | Virtual Machines: Mv2-series |
+> | Virtual Machines: NCv3-series |
+> | Virtual Machines: NCasT4 v3-series |
+> | Virtual Machines: NDasr A100 v4-Series |
+> | Virtual Machines: NDm A100 v4-Series |
+> | Virtual Machines: NDv2-series |
+> | Virtual Machines: NP-series |
+> | Virtual Machines: NVv3-series |
+> | Virtual Machines: NVv4-series |
+> | Virtual Machines: SAP HANA on Azure Large Instances |
+
+Older generations of services or virtual machines aren't listed. For more information, see [Previous generations of virtual machine sizes](../virtual-machines/sizes-previous-gen.md).
+
+To learn more about preview services that aren't yet in general availability and to see a listing of these services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). For a complete listing of services that support availability zones, see [Azure services that support availability zones](availability-zones-service-support.md).
+
+## Next steps
+
+- [Azure services and regions that support availability zones](availability-zones-service-support.md)
+
reliability Availability Zones Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md
+
+ Title: Availability zone migration guidance overview for Microsoft Azure products and services
+description: Availability zone migration guidance overview for Microsoft Azure products and services.
+Last updated: 11/08/2022
+# Availability zone migration guidance overview
+
+Azure services that support availability zones, including zonal and zone-redundant offerings, are continually expanding. For that reason, resources that don't currently have availability zone support may have an opportunity to gain that support. The Migration Guides section offers a collection of guides for each service that requires certain procedures to move a resource from non-availability zone support to availability zone support. You'll find information on prerequisites for migration, download requirements, important migration considerations, and recommendations.
+
+The tables below list each product that offers migration guidance and/or information.
+
+## Azure services migration guides
+
+### ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational services
+
+| **Products** |
+| |
+| [Azure Application Gateway (V2)](migrate-app-gateway-v2.md) |
+| [Azure Backup](migrate-recovery-services-vault.md) |
+| [Azure Site Recovery](migrate-recovery-services-vault.md) |
+| [Azure Storage account](migrate-storage.md) |
+| [Azure Storage: Azure Data Lake Storage](migrate-storage.md) |
+| [Azure Storage: Disk Storage](migrate-storage.md)|
+| [Azure Storage: Blob Storage](migrate-storage.md) |
+| [Azure Storage: Managed Disks](migrate-storage.md)|
+| [Azure Virtual Machine Scale Sets](migrate-vm.md)|
+| [Azure Virtual Machines](migrate-vm.md) |
| Virtual Machines: [Av2-Series](migrate-vm.md) |
+| Virtual Machines: [Bs-Series](migrate-vm.md) |
+| Virtual Machines: [DSv2-Series](migrate-vm.md) |
+| Virtual Machines: [DSv3-Series](migrate-vm.md) |
+| Virtual Machines: [Dv2-Series](migrate-vm.md) |
+| Virtual Machines: [Dv3-Series](migrate-vm.md) |
+| Virtual Machines: [ESv3-Series](migrate-vm.md) |
+| Virtual Machines: [Ev3-Series](migrate-vm.md) |
+| Virtual Machines: [F-Series](migrate-vm.md) |
+| Virtual Machines: [FS-Series](migrate-vm.md) |
+| Virtual Machines: [Azure Compute Gallery](migrate-vm.md)|
+
+\*VMs that support availability zones: AV2-series, B-series, DSv2-series, DSv3-series, Dv2-series, Dv3-series, ESv3-series, Ev3-series, F-series, FS-series, FSv2-series, and M-series.
+
+### ![An icon that signifies this service is mainstream.](media/icon-mainstream.svg) Mainstream services
+
+| **Products** |
+| |
+| [Azure API Management](migrate-api-mgt.md)|
+| [Azure App Service: App Service Environment](migrate-app-service-environment.md)|
+| [Azure Cache for Redis](migrate-cache-redis.md)|
+| [Azure Container Instances](migrate-container-instances.md) |
+| [Azure Monitor: Log Analytics](migrate-monitor-log-analytics.md)|
| Azure Storage: [Files Storage](migrate-storage.md)|
+| Virtual Machines: [Azure Dedicated Host](migrate-vm.md) |
+| Virtual Machines: [Ddsv4-Series](migrate-vm.md) |
+| Virtual Machines: [Ddv4-Series](migrate-vm.md) |
+| Virtual Machines: [Dsv4-Series](migrate-vm.md) |
+| Virtual Machines: [Dv4-Series](migrate-vm.md) |
+| Virtual Machines: [Edsv4-Series](migrate-vm.md) |
+| Virtual Machines: [Edv4-Series](migrate-vm.md) |
+| Virtual Machines: [Esv4-Series](migrate-vm.md) |
+| Virtual Machines: [Ev4-Series](migrate-vm.md) |
+| Virtual Machines: [Fsv2-Series](migrate-vm.md) |
+| Virtual Machines: [M-Series](migrate-vm.md) |
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure services and regions with availability zones](availability-zones-service-support.md)
+
+> [!div class="nextstepaction"]
+> [Availability of service by category](availability-service-by-category.md)
+
+> [!div class="nextstepaction"]
+> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
+
+> [!div class="nextstepaction"]
+> [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
reliability Availability Zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-overview.md
+
+ Title: What are Azure regions and availability zones?
+description: Learn about regions and availability zones and how they work to help you achieve reliability.
+Last updated: 10/25/2022
+# What are Azure regions and availability zones?
+
+Azure regions and availability zones are designed to help you achieve reliability for your business-critical workloads. Azure maintains multiple geographies. These discrete demarcations define disaster recovery and data residency boundaries across one or multiple Azure regions. Maintaining many regions ensures customers are supported across the world.
+
+## Regions
+
+Each Azure region features datacenters deployed within a latency-defined perimeter. They're connected through a dedicated regional low-latency network. This design ensures that Azure services within any region offer the best possible performance and security.
+
+To see which regions support availability zones, see [Azure regions with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+
+## Availability zones
+
+Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
+
+Azure availability zones are connected by a high-performance network with a round-trip latency of less than 2 ms. They help your data stay synchronized and accessible when things go wrong. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
+
+![Image showing physically separate availability zone locations within an Azure region.](media/availability-zones.png)
+
+Datacenter locations are selected by using rigorous vulnerability risk assessment criteria. This process identifies all significant datacenter-specific risks and considers shared risks between availability zones.
+
+With availability zones, you can design and operate applications and databases that automatically transition between zones without interruption. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures.
+
+Each datacenter is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. You can use the dedicated ARM API [checkZonePeers](/rest/api/resources/subscriptions/check-zone-peers) to compare zone mapping for resilient solutions that span multiple subscriptions.
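+
+A minimal PowerShell sketch of calling this API follows. The request body shape and api-version are assumptions based on the linked checkZonePeers reference and may change; consult that reference for the authoritative format. Placeholders are marked.
+
+```powershell
+# Compare the logical availability zone mapping of two subscriptions
+$subId = "<subscription-id>"    # placeholder
+$token = "<ARM bearer token>"   # placeholder
+$uri   = "https://management.azure.com/subscriptions/$subId/providers/Microsoft.Resources/checkZonePeers?api-version=2022-12-01"
+$body  = @{
+    location        = "eastus"
+    subscriptionIds = @("subscriptions/<peer-subscription-id>")  # placeholder
+} | ConvertTo-Json
+Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json" -Body $body
+```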
+
+You can design resilient solutions by using Azure services that use availability zones. Co-locate your compute, storage, networking, and data resources across an availability zone, and replicate this arrangement in other availability zones.
+
+Azure *availability zones-enabled services* are designed to provide the right level of resiliency and flexibility. They can be configured in two ways. They can be either *zone redundant*, with automatic replication across zones, or *zonal*, with instances pinned to a specific zone. You can also combine these approaches.
+
+Some organizations require the high availability that availability zones provide as well as protection from large-scale phenomena and regional disasters. Azure regions are designed to offer protection against localized disasters with availability zones, and protection from regional or large-geography disasters with disaster recovery, by making use of another region. To learn more about business continuity, disaster recovery, and cross-region replication, see [Cross-region replication in Azure](cross-region-replication-azure.md).
+
+![Image showing availability zones that protect against localized disasters and regional or large geography disasters by using another region.](media/availability-zones-region-geography.png)
+
+To see which services support availability zones, see [Azure regions with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure services and regions with availability zones](availability-zones-service-support.md)
+
+> [!div class="nextstepaction"]
+> [Availability zone migration guidance](availability-zones-migration-overview.md)
+
+> [!div class="nextstepaction"]
+> [Availability of service by category](availability-service-by-category.md)
+
+> [!div class="nextstepaction"]
+> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
+
+> [!div class="nextstepaction"]
+> [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
+
+ Title: Azure services that support availability zones
+description: Learn what services are supported by availability zones and understand resiliency across all Azure services.
+Last updated: 10/20/2022
+# Availability zone service and regional support
+
+Azure availability zones are physically separate locations within each Azure region. This article shows you which regions and services support availability zones.
+
+For more information on availability zones and regions, see [What are Azure regions and availability zones?](availability-zones-overview.md).
+
+## Azure regions with availability zone support
+
+## Azure services with availability zone support
+
+Azure services that support availability zones, including zonal and zone-redundant offerings, are continually expanding.
+
+Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can combine all three of these approaches to architecture when you design your reliability strategy.
+
+- **Zonal services**: A resource can be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. Resiliency is self-architected by replicating applications and data to one or more zones within the region. For example, virtual machines, managed disks, or standard IP addresses can be pinned to a specific zone, which allows for increased resiliency by having one or more instances of resources spread across zones.
+- **Zone-redundant services**: Resources are replicated or distributed across zones automatically. For example, zone-redundant services replicate the data across three zones so that a failure in one zone doesn't affect the high availability of the data.
+- **Always-available services**: These services are always available across all Azure geographies and are resilient to zone-wide and region-wide outages. For a complete list of always-available services, also called non-regional services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+
+For more information on older-generation virtual machines, see [Previous generations of virtual machine sizes](../virtual-machines/sizes-previous-gen.md).
+
+The following tables provide a summary of the current offering of zonal, zone-redundant, and always-available Azure services. They list Azure offerings according to the regional availability of each.
+
+##### Legend
+![Legend containing icons and meaning of each with respect to service category and regional availability of each service in the table.](media/legend.png)
+
+In the Product Catalog, always-available services are listed as "non-regional" services.
+
+Azure offerings are grouped into three categories that reflect their _regional_ availability: *foundational*, *mainstream*, and *strategic* services. Azure's general policy on deploying services into any given region is primarily driven by region type, service category, and customer demand. For more information, see [Azure services](availability-service-by-category.md).
+
+- **Foundational services**: Available in all recommended and alternate regions when a region is generally available, or within 90 days of a new foundational service becoming generally available.
+- **Mainstream services**: Available in all recommended regions within 90 days of a region's general availability. Mainstream services are demand-driven in alternate regions, and many are already deployed into a large subset of alternate regions.
+- **Strategic services**: Targeted service offerings, often industry-focused or backed by customized hardware. Strategic services are demand-driven for availability across regions, and many are already deployed into a large subset of recommended regions.
+
+### ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational services
+
+| **Products** | **Resiliency** |
+| | |
+| [Azure Application Gateway (V2)](migrate-app-gateway-v2.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure Backup](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure Cosmos DB](../cosmos-db/high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure DNS: Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure DNS: Azure DNS Private Resolver](../dns/dns-private-resolver-get-started-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Public IP](../virtual-network/ip-services/public-ip-addresses.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure Site Recovery](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure SQL](/azure/azure-sql/database/high-availability-sla) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Load Balancer](../load-balancer/load-balancer-standard-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Service Bus](../service-bus-messaging/service-bus-geo-dr.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Service Fabric](../service-fabric/service-fabric-cross-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Storage account](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Azure Data Lake Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Disk Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Blob Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Storage: Managed Disks](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Virtual Machine Scale Sets](../virtual-machines/availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure Virtual Machines](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| Virtual Machines: [Av2-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Bs-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [DSv2-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [DSv3-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Dv2-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Dv3-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [ESv3-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Ev3-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [F-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [FS-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Azure Compute Gallery](../virtual-machines/availability.md)| ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+
+\*VMs that support availability zones: AV2-series, B-series, DSv2-series, DSv3-series, Dv2-series, Dv3-series, ESv3-series, Ev3-series, F-series, FS-series, FSv2-series, and M-series.\*
+
+### ![An icon that signifies this service is mainstream.](media/icon-mainstream.svg) Mainstream services
+
+| **Products** | **Resiliency** |
+| | |
+| [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure API Management](migrate-api-mgt.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure App Configuration](../azure-app-configuration/faq.yml#how-does-app-configuration-ensure-high-data-availability) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure App Service](migrate-app-service.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure App Service: App Service Environment](migrate-app-service-environment.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Bastion](../bastion/bastion-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Batch](../batch/create-pool-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Cognitive Search](../search/search-performance-optimization.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Container Instances](../container-instances/availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Data Factory](../data-factory/concepts-data-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| Azure Database for MySQL - [Flexible Server](../mysql/flexible-server/concepts-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Azure Database for PostgreSQL - [Flexible Server](../postgresql/flexible-server/overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure DDoS Protection](../ddos-protection/ddos-faq.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Disk Encryption](../virtual-machines/disks-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Event Grid](../event-grid/overview.md) | ![An icon that signifies this service is zone-redundant](media/icon-zone-redundant.svg) |
+| [Azure Firewall](../firewall/deploy-availability-zone-powershell.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Firewall Manager](../firewall-manager/quick-firewall-policy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Functions](../azure-functions/azure-functions-az-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure HDInsight](../hdinsight/hdinsight-use-availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure IoT Hub](../iot-hub/iot-hub-ha-dr.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Kubernetes Service (AKS)](../aks/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| Azure Logic Apps | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Monitor](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Monitor: Application Insights](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Monitor: Log Analytics](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Network Watcher](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Azure Network Watcher: [Traffic Analytics](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Azure Notification Hubs | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Private Link](../private-link/private-link-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Route Server](../route-server/route-server-faq.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Azure Stream Analytics | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [SQL Server on Azure Virtual Machines](/azure/azure-sql/database/high-availability-sla) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Azure Storage: [Files Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Virtual WAN](../virtual-wan/virtual-wan-faq.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Web Application Firewall](../firewall/deploy-availability-zone-powershell.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Power BI Embedded](/power-bi/admin/service-admin-failover#what-does-high-availability) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Virtual Machines: [Azure Dedicated Host](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Ddsv4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Ddv4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Dsv4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Dv4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Edsv4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Edv4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Esv4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Ev4-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [Fsv2-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual Machines: [M-Series](../virtual-machines/availability.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Virtual WAN: [Azure ExpressRoute](../virtual-wan/virtual-wan-faq.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Virtual WAN: [Point-to-site VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Virtual WAN: [Site-to-site VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+
+### ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic services
+
+| **Products** | **Resiliency** |
+| --- | --- |
+| [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure NetApp Files](../azure-netapp-files/use-availability-zones.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| Azure Red Hat OpenShift | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Azure Storage: Ultra Disk | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+
+### ![An icon that signifies this service is non-regional.](media/icon-always-available.svg) Non-regional services (always-available services)
+
+| **Products** | **Resiliency** |
+| --- | --- |
+| Azure Active Directory | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Microsoft Defender for Identity | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Advisor | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Blueprints | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Bot Services | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Cloud Shell | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Content Delivery Network | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Cost Management and Billing | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Microsoft Defender for IoT | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure DNS | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Front Door | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Information Protection | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Lighthouse | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Managed Applications | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Maps | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Peering Service | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Performance Diagnostics | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Policy | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure portal | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Resource Graph | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Stack Edge | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure Traffic Manager | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Customer Lockbox for Microsoft Azure | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Microsoft Defender for Cloud | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Microsoft Graph | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Microsoft Intune | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Microsoft Sentinel | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+
+## Pricing for virtual machines in availability zones
+
+You can access Azure availability zones by using your Azure subscription. For details about data transfer charges that can apply between availability zones, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure services and regions with availability zones](availability-zones-service-support.md)
+
+> [!div class="nextstepaction"]
+> [Availability zone migration guidance overview](availability-zones-migration-overview.md)
+
+> [!div class="nextstepaction"]
+> [Availability of service by category](availability-service-by-category.md)
+
+> [!div class="nextstepaction"]
+> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
+
+> [!div class="nextstepaction"]
+> [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview)
+
reliability Business Continuity Management Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/business-continuity-management-program.md
+
+ Title: Business continuity management program in Azure
+description: Learn about one of the most mature business continuity management programs in the industry.
+ Last updated: 10/21/2021
+# Business continuity management in Azure
+
+Azure maintains one of the most mature and respected business continuity management programs in the industry. The goal of business continuity in Azure is to build and advance recoverability and resiliency for all independently recoverable services, whether a service is customer-facing (part of an Azure offering) or an internal supporting platform service.
+
+In understanding business continuity, it's important to note that many offerings are made up of multiple services. At Azure, each service is statically identified through tooling and is the unit of measure used for privacy, security, inventory, risk, business continuity management, and other functions. To properly measure the capabilities of a service, the three elements of people, process, and technology are included for each service, whatever the service type.
+
+![An image describing how elements such as people (those who work on the service and are required to support it), process (any process to do tasks that support the service), and technology (the technology used to deliver the service or the technology provided as the service itself) combine to create a service that benefits a cloud user.](./media/people-process-technology.png)
+
+For example:
+
+- If there's a business process based on people, such as a help desk or team, the service delivery is what they do. The people use processes and technology to perform the service.
+- If there's technology as a service, such as Azure Virtual Machines, the service delivery is the technology along with the people and processes that support its operation.
+
+## Shared responsibility model
+
+Many of the offerings Azure provides require customers to set up disaster recovery in multiple regions; this setup isn't the responsibility of Microsoft. Not all Azure services automatically replicate data or automatically fail over from a failed region to another enabled region. In these cases, the customer must configure recovery and replication.
+
+Microsoft does ensure that the baseline infrastructure and platform services are available. But in some scenarios, the customer must opt to duplicate their deployments and storage in a multi-region capacity. These examples illustrate the shared responsibility model, which is a fundamental pillar in your business continuity and disaster recovery strategy.
+
+### Division of responsibility
+
+In any on-premises datacenter, you own the whole stack. As you move assets to the cloud, some responsibilities transfer to Microsoft. The following diagram illustrates areas and division of responsibility between you and Microsoft according to the type of deployment.
+
+![A visual showing what responsibilities belong to the cloud customer versus the cloud provider.](./media/shared-responsibility-model.png)
+
+A good example of the shared responsibility model is the deployment of virtual machines. If a customer wants to set up *cross-region replication* for resiliency against region failure, they must deploy a duplicate set of virtual machines in an alternate enabled region. Azure doesn't automatically replicate these services if there's a failure; it's the customer's responsibility to deploy the necessary assets. The customer must have a process to manually change primary regions, or they must use a service such as Azure Traffic Manager to detect and automatically fail over.
+
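+As a minimal, hypothetical sketch of that automatic failover approach, the following Azure CLI commands create a priority-routed Azure Traffic Manager profile with a primary and a secondary endpoint. The profile name, DNS label, and resource IDs are placeholders, not values prescribed by this article.
+
+```azurecli
+# Create a priority-routed Traffic Manager profile (all names are hypothetical).
+az network traffic-manager profile create \
+  --name myDrProfile \
+  --resource-group myResourceGroup \
+  --routing-method Priority \
+  --unique-dns-name myapp-dr-example
+
+# Primary endpoint: the deployment in the primary region.
+az network traffic-manager endpoint create \
+  --name primary \
+  --profile-name myDrProfile \
+  --resource-group myResourceGroup \
+  --type azureEndpoints \
+  --target-resource-id "$PRIMARY_RESOURCE_ID" \
+  --priority 1
+
+# Secondary endpoint: the duplicate deployment in the alternate region.
+# Traffic fails over here when the primary endpoint is unhealthy.
+az network traffic-manager endpoint create \
+  --name secondary \
+  --profile-name myDrProfile \
+  --resource-group myResourceGroup \
+  --type azureEndpoints \
+  --target-resource-id "$SECONDARY_RESOURCE_ID" \
+  --priority 2
+```
+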
+Customer-enabled disaster recovery services all have public-facing documentation to guide you. For an example of public-facing documentation for customer-enabled disaster recovery, see [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-disaster-recovery.md).
+
+For more information on the shared responsibility model, see [Microsoft Trust Center](../security/fundamentals/shared-responsibility.md).
+
+## Business continuity compliance: Service-level responsibility
+
+Each service is required to complete Business Continuity Disaster Recovery records in the Azure Business Continuity Manager Tool. Service owners can use the tool to work within a federated model to complete and incorporate requirements that include:
+
+- **Service properties**: Defines the service and how disaster recovery and resiliency are achieved and identifies the responsible party for disaster recovery (for technology). For details on recovery ownership, see the discussion on the shared responsibility model in the preceding section and diagram.
+
+- **Business impact analysis**: This analysis helps the service owner define the recovery time objective (RTO) and recovery point objective (RPO) based on the criticality of the service across a table of impacts. Operational, legal, regulatory, brand image, and financial impacts are used as target goals for recovery.
+
+ > [!NOTE]
+ > Microsoft doesn't publish RTOs or RPOs for services because this data is for internal measures only. All customer promises and measures are SLA-based, because an SLA covers a wider range of scenarios than RTO or RPO, which apply only to catastrophic loss.
+
+- **Dependencies**: Each service maps the dependencies (other services) it requires to operate, no matter how critical, and classifies each dependency as needed at runtime, needed for recovery only, or both. If there are storage dependencies, additional data is mapped that defines what's stored and whether it requires point-in-time snapshots, for example.
+
+- **Workforce**: As noted in the definition of a service, it's important to know the location and size of the workforce able to support the service, to ensure there are no single points of failure and that critical employees are dispersed enough to avoid failures caused by cohabitation in a single location.
+
+- **External suppliers**: Microsoft keeps a comprehensive list of external suppliers, and the suppliers deemed critical are measured for capabilities. If identified by a service as a dependency, supplier capabilities are compared to the needs of the service to ensure a third-party outage doesn't disrupt Azure services.
+
+- **Recovery rating**: This rating is unique to the Azure Business Continuity Management program. This rating measures several key elements to create a resiliency score:
+
+ - Willingness to fail over: Although there can be a process, it might not be the first choice for short-term outages.
+ - Automation of failover.
+ - Automation of the decision to fail over.
+
+ The most reliable and shortest time to failover is a service that's automated and requires no human decision. An automated service uses heartbeat monitoring or synthetic transactions to determine a service is down and to start immediate remediation.
+
+- **Recovery plan and test**: Azure requires every service to have a detailed recovery plan and to test that plan as if the service has failed because of catastrophic outage. The recovery plans are required to be written so that someone with similar skills and access can complete the tasks. A written plan avoids relying on subject matter experts being available.
+
+ Testing is done in several ways, including self-test in a production or near-production environment, and as part of Azure full-region down drills in canary region sets. These enabled regions are identical to production regions but can be disabled without affecting customers. Testing is considered integrated because all services are affected simultaneously.
+
+- **Customer enablement**: When the customer is responsible for setting up disaster recovery, Azure is required to have public-facing documentation guidance. For all such services, links are provided to documentation and details about the process.
+
+## Verify your business continuity compliance
+
+When a service has completed its business continuity management record, you must submit it for approval. It's assigned to an experienced business continuity management practitioner who reviews the entire record for completeness and quality. If the record meets all requirements, it's approved. If it doesn't, it's rejected with a request for rework. This process ensures that both parties agree that business continuity compliance has been met and that the work isn't attested to by the service owner alone. Azure internal audit and compliance teams also do periodic random sampling to ensure the best data is being submitted.
+
+## Testing of services
+
+Microsoft and Azure do extensive testing for both disaster recovery and for availability zone readiness. Services are self-tested in a production or pre-production environment to demonstrate independent recoverability for services that aren't dependent on major platform failovers.
+
+To ensure services can similarly recover in a true region-down scenario, "pull-the-plug"-type testing is done in canary environments that are fully deployed regions matching production. For example, the clusters, racks, and power units are literally turned off to simulate a total region failure.
+
+During these tests, Azure uses the same production process for detection, notification, response, and recovery. No individuals are expecting a drill, and the engineers relied on for recovery are the normal on-call rotation resources. This approach avoids depending on subject matter experts who might not be available during an actual event.
+
+Included in these tests are services where the customer is responsible for setting up disaster recovery following Microsoft public-facing documentation. Service teams create customer-like instances to show that customer-enabled disaster recovery works as expected and that the instructions provided are accurate.
+
+For more information on certifications, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center) and the section on compliance.
+
+## Next steps
+
+- [Azure services and regions that support availability zones](availability-zones-service-support.md)
+- [Azure Resiliency whitepaper](https://azure.microsoft.com/resources/resilience-in-azure-whitepaper/)
+- [Quickstart templates](https://aka.ms/azqs)
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
+
+ Title: Cross-region replication in Azure
+description: Learn about cross-region replication in Azure.
+ Last updated: 3/01/2022
+# Cross-region replication in Azure: Business continuity and disaster recovery
+
+Many organizations require both the high availability provided by availability zones and protection from large-scale phenomena and regional disasters. Azure regions are designed to offer protection against local disasters through availability zones. They can also provide protection from regional or large-geography disasters through disaster recovery that uses *cross-region replication* with another region.
+
+## Cross-region replication
+
+To ensure customers are supported across the world, Azure maintains multiple geographies. These discrete demarcations define a disaster recovery and data residency boundary across one or multiple Azure regions.
+
+Cross-region replication is one of several important pillars in the Azure business continuity and disaster recovery strategy. Cross-region replication builds on the synchronous replication of your applications and data that exists by using availability zones within your primary Azure region for high availability. Cross-region replication asynchronously replicates the same applications and data across other Azure regions for disaster recovery protection.
+
+![Image depicting high availability via asynchronous replication of applications and data across other Azure regions for disaster recovery protection.](./media/cross-region-replication.png)
+
+Some Azure services take advantage of cross-region replication to ensure business continuity and protect against data loss. Azure provides several [storage solutions](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) that make use of cross-region replication to ensure data availability. For example, [Azure geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS) replicates data to a secondary region automatically. This approach ensures that data is durable even if the primary region isn't recoverable.
+
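+To make the storage example concrete, here's a minimal Azure CLI sketch of creating a geo-redundant storage account. The account name, resource group, and region are hypothetical placeholders.
+
+```azurecli
+# Create a storage account with geo-redundant storage (GRS).
+# Data is replicated automatically to the paired secondary region.
+az storage account create \
+  --name mygrsaccountexample \
+  --resource-group myResourceGroup \
+  --location eastus2 \
+  --sku Standard_GRS
+```
+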
+Not all Azure services automatically replicate data or automatically fail over from a failed region to another enabled region. In these scenarios, recovery and replication must be configured by the customer. These examples illustrate the *shared responsibility model*, which is a fundamental pillar in your disaster recovery strategy. For more information about the shared responsibility model and to learn about business continuity and disaster recovery in Azure, see [Business continuity management in Azure](business-continuity-management-program.md).
+
+Shared responsibility becomes the crux of your strategic decision-making when it comes to disaster recovery. Azure doesn't require you to use cross-region replication, and you can use services to build resiliency without cross-replicating to another enabled region. But we strongly recommend that you configure your essential services across regions to benefit from [isolation](../security/fundamentals/isolation-choices.md) and improve [availability](availability-zones-service-support.md).
+
+For applications that support multiple active regions, we recommend that you use multiple enabled regions whenever they're available. This practice ensures optimal availability for applications and minimizes recovery time if an event affects availability. Whenever possible, design your application for [maximum resiliency](/azure/architecture/framework/resiliency/overview) and ease of [disaster recovery](/azure/architecture/framework/resiliency/backup-and-recovery).
+
+## Benefits of cross-region replication
+
+Architecting cross-region replication for your services and data can be decided on a per-service basis, using a cost-benefit analysis grounded in your organization's strategic and business requirements. The primary and ripple benefits of cross-region replication are complex and extensive. They include:
+
+- **Region recovery sequence**: If a geography-wide outage occurs, recovery of one region out of every enabled set of regions is prioritized. Applications that are deployed across enabled region sets are guaranteed to have one of their regions prioritized for recovery. If an application is deployed across regions and any of them isn't enabled for cross-region replication, recovery can be delayed.
+- **Sequential updating**: Planned Azure system updates for your enabled regions are staggered chronologically to minimize downtime, impact of bugs, and any logical failures in the rare event of a faulty update.
+- **Physical isolation**: Azure strives to ensure a minimum distance of 300 miles (483 kilometers) between datacenters in enabled regions, although it isn't possible across all geographies. Datacenter separation reduces the likelihood that natural disaster, civil unrest, power outages, or physical network outages can affect multiple regions. Isolation is subject to the constraints within a geography, such as geography size, power or network infrastructure availability, and regulations.
+- **Data residency**: Regions reside within the same geography as their enabled set (except for Brazil South and Singapore) to meet data residency requirements for tax and law enforcement jurisdiction purposes.
+
+Although it is not possible to create your own regional pairings, you can nevertheless create your own disaster recovery solution by building your services in any number of regions and then using Azure services to pair them. For example, you can use Azure services such as [AzCopy](../storage/common/storage-use-azcopy-v10.md) to schedule data backups to an Azure Storage account in a different region. Using [Azure DNS and Azure Traffic Manager](../networking/disaster-recovery-dns-traffic-manager.md), you can design a resilient architecture for your applications that will survive the loss of the primary region.
+
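+As a concrete sketch of the AzCopy approach, the following command copies a container's contents to a storage account in a different region. The account names, container name, and SAS tokens are placeholders, not values from this article.
+
+```azurecli
+# Copy a container from a source storage account to a target account in
+# another region. Account names and SAS tokens are placeholders.
+azcopy copy \
+  "https://sourceaccount.blob.core.windows.net/backups?<source-sas>" \
+  "https://targetaccount.blob.core.windows.net/backups?<target-sas>" \
+  --recursive
+```
+
+You could run a command like this on a schedule to keep the secondary copy current; how fresh that copy needs to be depends on your recovery point objective.
+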
+Azure controls planned maintenance and recovery prioritization for regional pairs. Some Azure services rely upon regional pairs by default, such as Azure [redundant storage](../storage/common/storage-redundancy.md).
+
+You are not limited to using services within your regional pairs. Although an Azure service can rely upon a specific regional pair, you can host your other services in any region that satisfies your business needs. For example, an Azure GRS storage solution can pair data in Canada Central with a peer in Canada East while using Azure Compute resources located in East US.
+
+## Azure cross-region replication pairings for all geographies
+
+Regions are paired for cross-region replication based on proximity and other factors.
+
+**Azure regional pairs**
+
+| Geography | Regional pair A | Regional pair B |
+| --- | --- | --- |
+| Asia-Pacific |East Asia (Hong Kong) | Southeast Asia (Singapore) |
+| Australia |Australia East |Australia Southeast |
+| Australia |Australia Central |Australia Central 2\* |
+| Brazil |Brazil South |South Central US |
+| Brazil |Brazil Southeast\* |Brazil South |
+| Canada |Canada Central |Canada East |
+| China |China North |China East|
+| China |China North 2 |China East 2|
+| China |China North 3 |China East 3\* |
+| Europe |North Europe (Ireland) |West Europe (Netherlands) |
+| France |France Central|France South\*|
+| Germany |Germany West Central |Germany North\* |
+| India |Central India |South India |
+| India |West India |South India |
+| Japan |Japan East |Japan West |
+| Korea |Korea Central |Korea South\* |
+| North America |East US |West US |
+| North America |East US 2 |Central US |
+| North America |North Central US |South Central US |
+| North America |West US 2 |West Central US |
+| North America |West US 3 |East US |
+| Norway | Norway East | Norway West\* |
+| South Africa | South Africa North |South Africa West\* |
+| Sweden | Sweden Central |Sweden South\* |
+| Switzerland | Switzerland North |Switzerland West\* |
+| UK |UK West |UK South |
+| United Arab Emirates | UAE North | UAE Central\* |
+| US Department of Defense |US DoD East\* |US DoD Central\* |
+| US Government |US Gov Arizona\* |US Gov Texas\* |
+| US Government |US Gov Iowa\* |US Gov Virginia\* |
+| US Government |US Gov Virginia\* |US Gov Texas\* |
+
+(\*) Certain regions are access restricted to support specific customer scenarios, such as in-country disaster recovery. These regions are available only upon request by [creating a new support request in the Azure portal](https://portal.azure.com/#blade/Microsoft\_Azure\_Support/HelpAndSupportBlade/newsupportrequest).
+
+> [!IMPORTANT]
+> - West India is paired in one direction only. West India's secondary region is South India, but South India's secondary region is Central India.
+> - Brazil South is unique because it's paired with a region outside of its geography. Brazil South's secondary region is South Central US. The secondary region of South Central US isn't Brazil South.
+
+## Regions with availability zones and no region pair
+
+Azure continues to expand globally, with Qatar as the first region that has no regional pair. Such regions achieve high availability by using [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair don't have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). They follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines, allowing the option to keep data resident within the same region. Customers are responsible for data resiliency based on their RTO/RPO needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers need to plan for cross-region disaster recovery per the guidance in [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency – Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf).
+
+## Next steps
+
+- [Azure services and regions that support availability zones](availability-zones-service-support.md)
+- [Quickstart templates](https://aka.ms/azqs)
reliability Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/glossary.md
+
+ Title: Azure resiliency terminology
+description: Understand the terms used to describe Azure reliability, regions, and availability zones.
+ Last updated: 10/01/2021
+# Reliability terminology
+
+To better understand regions and availability zones in Azure, it helps to understand key terms or concepts.
+
+| Term or concept | Description |
+| --- | --- |
+| region | A set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. |
+| geography | An area of the world that contains at least one Azure region. Geographies define a discrete market that preserves data-residency and compliance boundaries. Geographies allow customers with specific data-residency and compliance needs to keep their data and applications close. Geographies are fault tolerant to withstand complete region failure through their connection to our dedicated high-capacity networking infrastructure. |
+| availability zone | Unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. |
+| recommended region | A region that provides the broadest range of service capabilities and is designed to support availability zones now, or in the future. These regions are designated in the Azure portal as **Recommended**. |
+| alternate (other) region | A region that extends Azure's footprint within a data-residency boundary where a recommended region also exists. Alternate regions help to optimize latency and provide a second region for disaster recovery needs. They aren't designed to support availability zones, although Azure conducts regular assessment of these regions to determine if they should become recommended regions. These regions are designated in the Azure portal as **Other**. |
+| cross-region replication (formerly paired region) | A reliability strategy and implementation that combines high availability of availability zones with protection from region-wide incidents to meet both disaster recovery and business continuity needs. |
+| foundational service | A core Azure service that's available in all regions when the region is generally available. |
+| mainstream service | An Azure service that's available in all recommended regions within 90 days of the region's general availability, or available on demand in alternate regions. |
+| strategic service | An Azure service whose availability across regions is demand-driven and that's backed by customized/specialized hardware. |
+| regional service | An Azure service that's deployed regionally and enables the customer to specify the region into which the service will be deployed. For a complete list, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all). |
+| non-regional service | An Azure service for which there's no dependency on a specific Azure region. Non-regional services are deployed to two or more regions. If there's a regional failure, the instance of the service in another region continues servicing customers. For a complete list, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all). |
+| zonal service | An Azure service that supports availability zones, and that enables a resource to be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. |
+| zone-redundant service | An Azure service that supports availability zones, and that enables resources to be replicated or distributed across zones automatically. |
+| always-available service | An Azure service that supports availability zones, and that enables resources to be always available across all Azure geographies as well as resilient to zone-wide and region-wide outages. |
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
+
+ Title: Migrate Azure API Management to availability zone support
+description: Learn how to migrate your Azure API Management instances to availability zone support.
+ Last updated: 07/07/2022
+# Migrate Azure API Management to availability zone support
+
+This guide describes how to enable availability zone support for your API Management instance. The API Management service supports [zone redundancy](../reliability/availability-zones-overview.md), which provides resiliency and high availability to a service instance in a specific Azure region. With zone redundancy, the gateway and the control plane of your API Management instance (Management API, developer portal, Git configuration) are replicated across datacenters in physically separated zones, making it resilient to a zone failure.
+
+In this article, we'll take you through the different options for availability zone migration.
+
+## Prerequisites
+
+* To configure API Management for zone redundancy, your instance must be in one of the following regions:
+
+ * Australia East
+ * Brazil South
+ * Canada Central
+ * Central India
+ * Central US
+ * East Asia
+ * East US
+ * East US 2
+ * France Central
+ * Germany West Central
+ * Japan East
+ * Korea Central (*)
+ * North Europe
+ * Norway East
+ * South Africa North (*)
+ * South Central US
+ * Southeast Asia
+ * Switzerland North
+ * UK South
+ * West Europe
+ * West US 2
+ * West US 3
+
+ > [!IMPORTANT]
+ > The regions marked with an asterisk (*) have restricted access. To enable availability zone support in an Azure subscription for these regions, work with your Microsoft sales or customer representative.
+
+* If you haven't yet created an API Management service instance, see [Create an API Management service instance](../api-management/get-started-create-service-instance.md). Select the Premium service tier.
+
+* API Management service must be in the Premium tier. If it isn't, you can [upgrade](../api-management/upgrade-and-scale.md#change-your-api-management-service-tier) to the Premium tier.
+
+* If your API Management instance is deployed (injected) in an [Azure virtual network (VNet)](../api-management/api-management-using-with-vnet.md), check the version of the [compute platform](../api-management/compute-infrastructure.md) (stv1 or stv2) that hosts the service.
+
+## Downtime requirements
+
+There are no downtime requirements for any of the migration options.
+
+## Considerations
+
+* Changes can take from 15 to 45 minutes to apply. The API Management gateway can continue to handle API requests during this time.
+
+* Migrating to availability zones or changing the availability zone configuration will trigger a public [IP address change](../api-management/api-management-howto-ip-addresses.md#changes-to-the-ip-addresses).
+
+* If you've configured autoscaling for your API Management instance in the primary location, you might need to adjust your autoscale settings after enabling zone redundancy. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones, as in the sketch that follows this list.
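+
+As a hypothetical illustration of that multiple-of-zones rule for a three-zone deployment, the following Azure CLI autoscale setting keeps the default, minimum, and maximum unit counts at multiples of three. The resource and setting names are placeholders.
+
+```azurecli
+# Autoscale setting for a 3-zone API Management instance: counts stay
+# multiples of three so units divide evenly across zones (names are hypothetical).
+az monitor autoscale create \
+  --resource-group myResourceGroup \
+  --resource myApimInstance \
+  --resource-type Microsoft.ApiManagement/service \
+  --name apim-zone-autoscale \
+  --min-count 3 \
+  --max-count 9 \
+  --count 3
+```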
+
+## Option 1: Migrate existing location of API Management instance, not injected in VNet
+
+Use this option to migrate an existing location of your API Management instance to availability zones when it's not injected (deployed) in a virtual network.
+
+### How to migrate API Management when it's not injected in a VNet
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **Locations** in the menu, and then select the location to be migrated. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. Select **Apply**, and then select **Save**.
+
+ :::image type="content" alt-text="Screenshot of how to migrate existing location of API Management instance not injected in VNet." source="media/migrate-api-mgt/option-one-not-injected-in-vnet.png":::
+
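+If you prefer infrastructure as code, the following ARM snippet is a minimal, hypothetical sketch of the equivalent zone configuration on the `Microsoft.ApiManagement/service` resource: a Premium instance with three units spread across three zones. The name, location, and publisher values are placeholders.
+
+```json
+{
+  "type": "Microsoft.ApiManagement/service",
+  "apiVersion": "2021-08-01",
+  "name": "my-apim-instance",
+  "location": "East US 2",
+  "sku": {
+    "name": "Premium",
+    "capacity": 3
+  },
+  "zones": [ "1", "2", "3" ],
+  "properties": {
+    "publisherEmail": "admin@contoso.com",
+    "publisherName": "Contoso"
+  }
+}
+```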
+
+## Option 2: Migrate existing location of API Management instance (stv1 platform), injected in VNet
+
+Use this option to migrate an existing location of your API Management instance to availability zones when it is currently injected (deployed) in a virtual network. The following steps are needed when the API Management instance is currently hosted on the stv1 platform. Migrating to availability zones will also migrate the instance to the stv2 platform.
+
+1. Create a new subnet and public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **Locations** in the menu, and then select the location to be migrated. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. Select the new subnet and new public IP address in the location.
+
+1. Select **Apply**, and then select **Save**.
+
+ :::image type="content" alt-text="Screenshot of how to migrate existing location of API Management instance injected in VNet." source="media/migrate-api-mgt/option-two-injected-in-vnet.png":::
+
+## Option 3: Migrate existing location of API Management instance (stv2 platform), injected in VNet
+
+Use this option to migrate an existing location of your API Management instance to availability zones when it is currently injected (deployed) in a virtual network. The following steps are used when the API Management instance is already hosted on the stv2 platform.
+
+1. Create a new subnet and public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **Locations** in the menu, and then select the location to be migrated. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. Select the new public IP address in the location.
+
+1. Select **Apply**, and then select **Save**.
+
+ :::image type="content" alt-text="Screenshot of how to migrate existing location of API Management instance (stv2 platform) injected in VNet." source="media/migrate-api-mgt/option-three-stv2-injected-in-vnet.png":::
+
+## Option 4: Add new location for API Management instance (with or without VNet) with availability zones
+
+Use this option to add a new location to your API Management instance and enable availability zones in that location.
+
+If your API Management instance is deployed in a virtual network in the primary location, ensure that you set up a [virtual network](../api-management/api-management-using-with-vnet.md?tabs=stv2), subnet, and public IP address in any new location where you plan to enable zone redundancy.
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **+ Add** in the top bar to add a new location. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. If your API Management instance is deployed in a [virtual network](../api-management/api-management-using-with-vnet.md?tabs=stv2), select the virtual network, subnet, and public IP address that are available in the location.
+
+1. Select **Add**, and then select **Save**.
+
+ :::image type="content" alt-text="Screenshot of how to add new location for API Management instance with or without VNet." source="media/migrate-api-mgt/option-four-add-new-location.png":::
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Deploying an Azure API Management service instance to multiple Azure regions](../api-management/api-management-howto-deploy-multi-region.md).
+
+> [!div class="nextstepaction"]
+> [Building for reliability](/azure/architecture/framework/resiliency/app-design) in Azure.
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
+
reliability Migrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-configuration.md
+
+ Title: Migrate App Configuration to a region with availability zone support
+description: Learn how to migrate Azure App Configuration to availability zone support.
+ Last updated: 09/10/2022
+# Migrate App Configuration to a region with availability zone support
+
+Azure App Configuration supports Azure availability zones. This guide describes how to migrate an App Configuration store from a region without availability zone support to a region that supports availability zones.
+
+## Availability zone support in Azure App Configuration
+
+Azure App Configuration supports Azure availability zones to protect your application and data from single datacenter failures. All availability zone-enabled regions have a minimum of three availability zones, and each availability zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. In regions where App Configuration supports availability zones, all stores have availability zones enabled by default.
+
+For more information about availability zones, see [Regions and availability zones in Azure](../reliability/availability-zones-overview.md).
+
+## App Configuration store migration
+
+### If App Configuration starts supporting availability zones in your region
+
+#### Prerequisites
+
+None
+
+#### Downtime requirements
+
+None
+
+#### Process
+
+If you created a store in a region where App Configuration didn't support availability zones at the time, and the region gains that support later, you don't need to do anything. Your store automatically benefits from the availability zone support that becomes available for App Configuration stores in the region.
+
+### If App Configuration doesn't support availability zones in your region
+
+#### Prerequisites
+
+- An Azure subscription with the Owner or Contributor role to create a new App Configuration store
+- Owner, Contributor, or App Configuration Data Owner permissions on the App Configuration store with no availability zone support.
+
+#### Downtime requirements
+
+None
+
+#### Process
+
+If App Configuration doesn't support availability zones in your region, you'll need to move your App Configuration data from this store to another store in a region where App Configuration has availability zone support.
+
+App Configuration stores are region-specific and can't be migrated across regions. To move a store to a region where App Configuration has availability zone support, you must create a new App Configuration store in the target region, then move your App Configuration data from the source store to the new target store.
+
+The following steps walk you through the process of creating a new target store and using the import/export functionality to move the configuration data from your current store to the newly created store.
+
+1. Create a target configuration store in a [region where App Configuration has availability zone support](#availability-zone-support-in-azure-app-configuration).
+1. Transfer your configuration data using the [import function](../azure-app-configuration/howto-import-export-data.md) in your target configuration store, as in the sketch after these steps.
+1. Optionally, delete your source configuration store if you have no use for it.
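+
+As a minimal, hypothetical sketch of step 2, this Azure CLI command exports key-values from the source store directly into the target store. The store names are placeholders.
+
+```azurecli
+# Export all key-values from the source store into the target store
+# (store names are hypothetical placeholders).
+az appconfig kv export \
+  --name SourceAppConfigStore \
+  --destination appconfig \
+  --dest-name TargetAppConfigStore \
+  --yes
+```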
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency and disaster recovery](../azure-app-configuration/concept-geo-replication.md)
+
+> [!div class="nextstepaction"]
+> [Building for reliability](/azure/architecture/framework/resiliency/app-design) in Azure.
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
+
reliability Migrate App Gateway V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-gateway-v2.md
+
+ Title: Migrate Azure Application Gateway Standard and WAF v2 deployments to availability zone support
+description: Learn how to migrate your Azure Application Gateway and WAF deployments to availability zone support.
+ Last updated: 07/28/2022
+# Migrate Application Gateway and WAF deployments to availability zone support
+
+[Application Gateway Standard v2](../application-gateway/overview-v2.md) and Application Gateway with [WAF v2](../web-application-firewall/ag/ag-overview.md) support zonal and zone-redundant deployments. For more information about zone redundancy, see [Azure services and regions that support availability zones](availability-zones-service-support.md).
+
+If you previously deployed **Azure Application Gateway Standard v2** or **Azure Application Gateway Standard v2 + WAF v2** without zonal support, you must redeploy these services to enable zone redundancy. Two migration options to redeploy these services are described in this article.
+
+## Prerequisites
+
+- Your deployment must be Standard v2 or WAF v2 SKU. Earlier SKUs (Standard and WAF) don't support availability zones.
+
+## Downtime requirements
+
+Some migration options described in this article require downtime until new deployments have been completed.
+
+## Option 1: Create a separate Application Gateway and IP address
+
+This option requires you to create a separate Application Gateway deployment, using a new public IP address. Workloads are then migrated from the non-zone aware Application Gateway setup to the new one.
+
+Since you're changing the public IP address, changes to DNS configuration are required. This option also requires some changes to virtual networks and subnets.
+
+Use this option to:
+
+- Minimize downtime. Once DNS records are updated to point to the new environment, clients establish new connections to the new gateway with no interruption.
+- Allow for extensive testing or even a blue/green scenario.
+
+To create a separate Application Gateway, an optional WAF, and a new IP address (an Azure CLI sketch follows these steps):
+
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Follow the steps in [Create an application gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway) or [Create an application gateway with a Web Application Firewall](../web-application-firewall/ag/application-gateway-web-application-firewall-portal.md) to create a new Application Gateway v2 or Application Gateway v2 + WAF v2, respectively. You can reuse your existing Virtual Network or create a new one, but you must create a new frontend Public IP address.
+3. Verify that the application gateway and WAF are working as intended.
+4. Migrate your DNS configuration to the new public IP address.
+5. Delete the old Application Gateway and WAF resources.
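+
+As a rough Azure CLI alternative to the portal-based creation in step 2 (resource names and network values are hypothetical, not prescribed here), you might create the zone-aware public IP and gateway like this:
+
+```azurecli
+# Create a zone-redundant Standard public IP for the new gateway.
+az network public-ip create \
+  --resource-group myResourceGroup \
+  --name myNewAppGwPublicIp \
+  --sku Standard \
+  --zone 1 2 3
+
+# Create the new Application Gateway v2 spanning all three zones.
+az network application-gateway create \
+  --resource-group myResourceGroup \
+  --name myNewAppGateway \
+  --sku Standard_v2 \
+  --public-ip-address myNewAppGwPublicIp \
+  --vnet-name myVNet \
+  --subnet myAppGwSubnet \
+  --zones 1 2 3
+```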
+
+## Option 2: Delete and redeploy Application Gateway
+
+This option doesn't require you to reconfigure your virtual network and subnets. If the public IP address for the Application Gateway is already configured for the desired end state zone awareness, you can choose to delete and redeploy the Application Gateway, and leave the Public IP address unchanged.
+
+Use this option to:
+
+- Avoid changing IP address, subnet, and DNS configurations.
+- Move workloads that are not sensitive to downtime.
+
+To delete the Application Gateway and WAF and redeploy:
+
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Select **All resources**, and then select the resource group that contains the Application Gateway.
+3. Select the Application Gateway resource and then select **Delete**. Type **yes** to confirm deletion, and then select **Delete**.
+4. Follow the steps in [Create an application gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway) or [Create an application gateway with a Web Application Firewall](../web-application-firewall/ag/application-gateway-web-application-firewall-portal.md) to create a new Application Gateway v2 or Application Gateway v2 + WAF v2, respectively, using the same Virtual Network, subnets, and Public IP address that you used previously. A CLI sketch follows these steps.
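+
+As a hypothetical CLI equivalent of this delete-and-redeploy flow (names are placeholders, and the existing public IP is assumed to already be a zone-redundant Standard SKU):
+
+```azurecli
+# Delete the existing non-zonal gateway; the public IP and VNet stay in place.
+az network application-gateway delete \
+  --resource-group myResourceGroup \
+  --name myAppGateway
+
+# Recreate the gateway with the same name and public IP, now spanning zones.
+az network application-gateway create \
+  --resource-group myResourceGroup \
+  --name myAppGateway \
+  --sku Standard_v2 \
+  --public-ip-address myExistingPublicIp \
+  --vnet-name myVNet \
+  --subnet myAppGwSubnet \
+  --zones 1 2 3
+```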
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Scaling and Zone-redundant Application Gateway v2](../application-gateway/application-gateway-autoscaling-zone-redundant.md)
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Migrate App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service-environment.md
+
+ Title: Migrate Azure App Service Environment to availability zone support
+description: Learn how to migrate an Azure App Service Environment to availability zone support.
+ Last updated: 06/08/2022
+# Migrate App Service Environment to availability zone support
+
+This guide describes how to migrate an App Service Environment from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with Isolated v2 App Service plans. Availability zones are only supported on App Service Environment v3. If you're using App Service Environment v1 or v2 and want to use availability zones, you'll need to migrate to App Service Environment v3.
+
+Azure App Service Environment can be deployed across [availability zones (AZ)](../reliability/availability-zones-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
+
+When you configure your App Service Environment to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. This means that the minimum App Service plan instance count will always be three. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones. For example, with five instances, each zone hosts at least one instance, and the two remaining instances land in two different zones, for a 2/2/1 distribution.
+
+## Prerequisites
+
+- You configure availability zones when you create your App Service Environment.
+ - All App Service plans created in that App Service Environment will need a minimum of 3 instances and those will automatically be zone redundant.
+- You can only specify availability zones when creating a **new** App Service Environment. A pre-existing App Service Environment can't be converted to use availability zones.
+- Availability zones are only supported in a [subset of regions](../app-service/environment/overview.md#regions).
+
+## Downtime requirements
+
+Downtime will be dependent on how you decide to carry out the migration. Since you can't convert pre-existing App Service Environments to use availability zones, migration will consist of a side-by-side deployment where you'll create a new App Service Environment with availability zones enabled.
+
+Downtime will depend on how you choose to redirect traffic from your old to your new availability zone enabled App Service Environment. For example, if you're using an [Application Gateway](../app-service/networking/app-gateway-with-service-endpoints.md), a [custom domain](../app-service/app-service-web-tutorial-custom-domain.md), or [Azure Front Door](../frontdoor/front-door-overview.md), downtime will be dependent on the time it takes to update those respective services with your new app's information. Alternatively, you can route traffic to multiple apps at the same time using a service such as [Azure Traffic Manager](../app-service/web-sites-traffic-manager.md) and only fully cutover to your new availability zone enabled apps when everything is deployed and fully tested. For more information on App Service Environment migration options, see [App Service Environment migration](../app-service/environment/migration-alternatives.md). If you're already using App Service Environment v3, disregard the information about migration from previous versions and focus on the app migration strategies.
+
+## Migration guidance: Redeployment
+
+### When to use redeployment
+
+If you want your App Service Environment to use availability zones, redeploy your apps into a newly created availability zone enabled App Service Environment.
+
+### Important considerations when using availability zones
+
+Traffic is routed to all of your available App Service instances. In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances and spread traffic as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed since back filling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section.
+
+Applications that are deployed in an App Service Environment that has availability zones enabled continue to run and serve traffic even if other zones in the same region suffer an outage. However, it's possible that non-runtime behaviors, including App Service plan scaling, application creation, application configuration, and application publishing, may still be impacted by an outage in other availability zones. Zone redundancy for App Service Environments only ensures continued uptime for deployed applications.
+
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is "balanced" if each zone has either the same number of VMs as, or at most one VM more or fewer than, all of the other zones used by the plan. For example, a plan with five instances is balanced when the zones hold two, two, and one instance.
+
+## In-region data residency
+
+A zone redundant App Service Environment will only store customer data within the region where it has been deployed. App content, settings, and secrets stored in App Service remain within the region where the zone redundant App Service Environment is deployed.
+
+### How to redeploy
+
+The following steps describe how to enable availability zones.
+
+1. To redeploy and ensure you'll be able to use availability zones, you'll need to be on the App Service footprint that supports availability zones. Create your new App Service Environment in one of the [supported regions](../app-service/environment/overview.md#regions).
+1. Ensure the zoneRedundant property (described below) is set to true when creating the new App Service Environment.
+1. Create your new App Service plans and apps in the new App Service Environment using your desired deployment method.
+
+You can create an App Service Environment with availability zones using the [Azure CLI](/cli/azure/install-azure-cli), [Azure portal](https://portal.azure.com), or an [Azure Resource Manager (ARM) template](../azure-resource-manager/templates/overview.md).
+
+To enable availability zones using the Azure CLI, include the `--zone-redundant` parameter when you create your App Service Environment.
+
+```azurecli
+az appservice ase create --resource-group MyResourceGroup --name MyAseName --zone-redundant --vnet-name MyVNet --subnet MySubnet --kind asev3 --virtual-ip-type Internal
+```
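+
+After deployment, you can confirm the setting by querying the environment. The following sketch assumes the property surfaces as `zoneRedundant` at the top level of the CLI output:
+
+```azurecli
+az appservice ase show --resource-group MyResourceGroup --name MyAseName --query zoneRedundant
+```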
+
+To create an App Service Environment with availability zones using the Azure portal, enable the zone redundancy option during the "Create App Service Environment v3" experience on the Hosting tab.
+
+The only change needed in an Azure Resource Manager template to specify an App Service Environment with availability zones is the ***zoneRedundant*** property on the [Microsoft.Web/hostingEnvironments](/azure/templates/microsoft.web/hostingEnvironments?tabs=json) resource. The ***zoneRedundant*** property should be set to ***true***.
+
+```json
+"resources": [
+ {
+ "apiVersion": "2019-08-01",
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "MyAppServiceEnvironment",
+ "kind": "ASEV3",
+ "location": "West US 3",
+ "properties": {
+ "name": "MyAppServiceEnvironment",
+ "location": "West US 3",
+ "dedicatedHostCount": "0",
+ "zoneRedundant": true,
+ "InternalLoadBalancingMode": 0,
+ "virtualNetwork": {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet/subnets/MySubnet"
+ }
+ }
+ }
+]
+```
+
+## Pricing
+
+There's a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There's no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This difference is billed as Windows I1v2 instances.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service.md
+
+ Title: Migrate Azure App Service to availability zone support
+description: Learn how to migrate Azure App Service to availability zone support.
+Last updated: 10/19/2022
+# Migrate App Service to availability zone support
+
+This guide describes how to migrate the public multi-tenant App Service from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+Azure App Service can be deployed into [availability zones (AZ)](../reliability/availability-zones-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
+
+An App Service lives in an App Service plan (ASP), and the App Service plan exists in a single scale unit. App Services are zonal services, which means that App Services can be deployed using one of the following methods:
+
+- For App Services that aren't configured to be zone redundant, the VM instances are placed in a single zone that is selected by the platform in the selected region.
+
+- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If you specify a VM instance capacity larger than three and the number of instances is a multiple of three (3 * N), the instances are spread evenly. Otherwise, the remaining instances are spread across the remaining one or two zones.
+
+## Prerequisites
+
+Availability zone support is a property of the App Service plan. The following are the current requirements/limitations for enabling availability zones:
+
+- Both Windows and Linux are supported.
+- Requires either **Premium v2** or **Premium v3** App Service plans.
+- Minimum instance count of three is enforced.
+ - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three.
+- Can be enabled in any of the following regions:
+ - West US 2
+ - West US 3
+ - Central US
+ - East US
+ - East US 2
+ - South Central US
+ - Canada Central
+ - Brazil South
+ - North Europe
+ - West Europe
+ - Sweden Central
+ - Germany West Central
+ - France Central
+ - UK South
+ - Japan East
+ - East Asia
+ - Southeast Asia
+ - Qatar Central
+ - Central India
+ - Australia East
+- Availability zones can only be specified when creating a **new** App Service plan. A pre-existing App Service plan can't be converted to use availability zones.
+- Availability zones are only supported in the newer portion of the App Service footprint.
+ - Currently, if you're running on Pv3, then it's possible that you're already on a footprint that supports availability zones. In this scenario, you can create a new App Service plan and specify zone redundancy.
+ - If you aren't using Pv3 or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](#migration-guidance-redeployment).
+
+## Downtime requirements
+
+Downtime will be dependent on how you decide to carry out the migration. Since you can't convert pre-existing App Service plans to use availability zones, migration will consist of a side-by-side deployment where you'll create new App Service plans. Downtime will depend on how you choose to redirect traffic from your old to your new availability zone enabled App Service. For example, if you're using an [Application Gateway](../app-service/networking/app-gateway-with-service-endpoints.md), a [custom domain](../app-service/app-service-web-tutorial-custom-domain.md), or [Azure Front Door](../frontdoor/front-door-overview.md), downtime will be dependent on the time it takes to update those respective services with your new app's information. Alternatively, you can route traffic to multiple apps at the same time using a service such as [Azure Traffic Manager](../app-service/web-sites-traffic-manager.md) and only fully cutover to your new availability zone enabled apps when everything is deployed and fully tested.
+
+## Migration guidance: Redeployment
+
+### When to use redeployment
+
+If you want your App Service to use availability zones, redeploy your apps into newly created availability zone enabled App Service plans.
+
+### Important considerations when using availability zones
+
+Traffic is routed to all of your available App Service instances. If a zone goes down, the App Service platform detects the lost instances, automatically attempts to find new replacement instances, and spreads traffic as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale also issues a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed, since backfilling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone, as described in the next section.
+
+Applications that are deployed in an App Service plan that has availability zones enabled continue to run and serve traffic even if other zones in the same region suffer an outage. However, it's possible that non-runtime behaviors, including App Service plan scaling, application creation, application configuration, and application publishing, may still be impacted by an outage in other availability zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications.
+
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is "balanced" if each zone has either the same number of VMs as, or at most one VM more or fewer than, all of the other zones used by the plan. For example, a plan with five instances is balanced when the zones hold two, two, and one instance.
+
+### How to redeploy
+
+The following steps describe how to enable availability zones.
+
+1. To redeploy and ensure you'll be able to use availability zones, you'll need to be on the App Service footprint that supports availability zones. If you're already using the Pv3 SKU and are in one of the [supported regions](#prerequisites), you can move on to the next step. Otherwise, you should create a new resource group in one of the supported regions to ensure the App Service control plane can find a scale unit in the selected region that supports availability zones.
+1. Create a new App Service plan in one of the supported regions using the **new** resource group.
+1. Ensure the zoneRedundant property (described below) is set to true when creating the new App Service plan.
+1. Create your apps in the new App Service plan using your desired deployment method.
+
+You can create an App Service with availability zones using the [Azure CLI](/cli/azure/install-azure-cli), [Azure portal](https://portal.azure.com), or an [Azure Resource Manager (ARM) template](../azure-resource-manager/templates/overview.md).
+
+To enable availability zones using the Azure CLI, include the `--zone-redundant` parameter when you create your App Service plan. You can also include the `--number-of-workers` parameter to specify capacity. If you don't specify a capacity, the platform defaults to three. Set the capacity based on your workload requirement, but no less than three. A good rule of thumb is to provision enough instances that losing one zone of instances still leaves sufficient capacity to handle your expected load.
+
+```azurecli
+az appservice plan create --resource-group MyResourceGroup --name MyPlan --sku P1v2 --zone-redundant --number-of-workers 6
+```
+
+> [!TIP]
+> To decide instance capacity, you can use the following calculation:
+>
+> Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
+>
+
+To create an App Service with availability zones using the Azure portal, enable the zone redundancy option during the "Create Web App" or "Create App Service Plan" experiences.
+
+The capacity/number of workers/instance count can be changed once the App Service Plan is created by navigating to the **Scale out (App Service plan)** settings.
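+
+If you prefer the CLI, the following sketch changes the instance count on an existing plan; it assumes the plan and resource group names used earlier:
+
+```azurecli
+az appservice plan update --resource-group MyResourceGroup --name MyPlan --number-of-workers 6
+```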
+
+The only changes needed in an Azure Resource Manager template to specify an App Service with availability zones are the ***zoneRedundant*** property (required) and optionally the App Service plan instance count (***capacity***) on the [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?tabs=json) resource. The ***zoneRedundant*** property should be set to ***true*** and ***capacity*** should be set based on the same conditions described previously.
+
+The Azure Resource Manager template snippet below shows the new ***zoneRedundant*** property and ***capacity*** specification.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2018-02-01",
+ "name": "your-appserviceplan-name-here",
+ "location": "West US 3",
+ "sku": {
+ "name": "P1v3",
+ "tier": "PremiumV3",
+ "size": "P1v3",
+ "family": "Pv3",
+ "capacity": 3
+ },
+ "kind": "app",
+ "properties": {
+ "zoneRedundant": true
+ }
+ }
+]
+```
+
+## Pricing
+
+There's no additional cost associated with enabling availability zones. Pricing for a zone redundant App Service is the same as a single zone App Service. You'll be charged based on your App Service plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to create and deploy ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md)
+
+> [!div class="nextstepaction"]
+> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
+
+> [!div class="nextstepaction"]
+> [Learn how to scale up an app in Azure App Service](../app-service/manage-scale-up.md)
+
+> [!div class="nextstepaction"]
+> [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md)
+
+> [!div class="nextstepaction"]
+> [Manage disaster recovery](../app-service/manage-disaster-recovery.md)
reliability Migrate Cache Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-cache-redis.md
+
+ Title: Migrate an Azure Cache for Redis instance to availability zone support
+description: Learn how to migrate an Azure Cache for Redis instance to availability zone support.
+Last updated: 06/23/2022
+
+# Migrate an Azure Cache for Redis instance to availability zone support
+
+This guide describes how to migrate your Azure Cache for Redis instance from non-availability zone support to availability zone support.
+
+Azure Cache for Redis supports zone redundancy in its Premium, Enterprise, and Enterprise Flash tiers. A zone-redundant cache runs on VMs spread across multiple availability zones to provide high resilience and availability.
+
+Currently, the only way to convert a resource from non-availability zone support to availability zone support is to redeploy your current cache.
+
+## Prerequisites
+
+To migrate to availability zone support, you must have an Azure Cache for Redis resource in either the Premium, Enterprise, or Enterprise Flash tiers.
+
+## Downtime requirements
+
+There are multiple ways to migrate data to a new cache. Many of them require some downtime.
+
+## Migration guidance: redeployment
+
+### When to use redeployment
+
+Azure Cache for Redis currently doesn't allow adding availability zone support to an existing cache. The best way to convert a non-zone redundant cache to a zone redundant cache is to deploy a new cache using the availability zone configuration you need, and then migrate your data from the current cache to the new cache.
+
+### Redeployment considerations
+
+Running multiple caches simultaneously while you migrate your data to the new cache incurs extra cost.
+
+### How to redeploy
+
+1. To create a new zone redundant cache that meets your requirements, follow the steps in [Enable zone redundancy for Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md).
+
+>[!TIP]
+>To ease the migration process, create the new cache in the same tier, SKU, and region as your current cache. A CLI sketch of this step follows these steps.
+
+1. Migrate your data from the current cache to the new zone redundant cache. To learn the most common ways to migrate based on your requirements and constraints, see [Cache migration guide - Migration options](../azure-cache-for-redis/cache-migration-guide.md).
+
+1. Configure your application to point to the new zone redundant cache.
+
+1. Delete your old cache.
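+
+As a minimal sketch of the cache creation step, the following command creates a Premium cache spread across all three zones. The names are placeholders, and the `--zones` parameter is assumed to accept the list shown:
+
+```azurecli
+az redis create --resource-group MyResourceGroup --name MyZrCache --location eastus2 --sku Premium --vm-size P1 --zones 1 2 3
+```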
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Migrate Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-container-instances.md
+
+ Title: Migrate Azure Container Instances to availability zone support
+description: Learn how to migrate Azure Container Instances to availability zone support.
+Last updated: 07/22/2022
+# Migrate Azure Container Instances to availability zone support
+
+This guide describes how to migrate Azure Container Instances from non-availability zone support to availability zone support.
+
+## Prerequisites
+
+* If using Azure CLI, ensure version 2.30.0 or later
+* If using PowerShell, ensure version 2.1.1-preview or later
+* If using the Java SDK, ensure version 2.9.0 or later
+* If calling the API directly, use ACI API version 2021-09-01 or later
+* Make sure the region you're migrating to supports zonal container group deployments. To view a list of supported regions, see [Resource availability for Azure Container Instances in Azure regions](../container-instances/container-instances-region-availability.md).
+
+## Considerations
+
+The following container groups don't support availability zones, and no migration guidance is available for them:
+
+- Container groups with GPU resources
+- Virtual Network injected container groups
+- Windows Server 2016 container groups
+
+## Downtime requirements
+
+Because ACI requires you to delete your existing deployment and recreate it with zonal support, the downtime is the time it takes to make a new deployment.
+
+## Migration guidance: Delete and redeploy container group
+
+To delete and redeploy a container group:
+
+1. Delete your current container group with one of the following tools:
+
+ - [Azure CLI](../container-instances/container-instances-quickstart.md#clean-up-resources)
+ - [PowerShell](../container-instances/container-instances-quickstart.md#clean-up-resources)
+ - [Portal](../container-instances/container-instances-quickstart-portal.md#clean-up-resources)
+
+ >[!NOTE]
+ >Zonal deployment isn't supported in the Azure portal. Even if you delete your container group through the portal, you'll still need to create your new container group by using the Azure CLI or PowerShell.
+
+1. Follow the steps in [Deploy an Azure Container Instances (ACI) container group in an availability zone (preview)](../container-instances/availability-zones.md).
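+
+As an illustrative sketch of the redeployment step, the following command creates a container group pinned to zone 1. The names and image are placeholders, and the `--zone` parameter is assumed to be available in your CLI version:
+
+```azurecli
+az container create --resource-group MyResourceGroup --name mycontainergroup --image mcr.microsoft.com/azuredocs/aci-helloworld --zone 1
+```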
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Migrate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-functions.md
+
+ Title: Migrate Azure Functions to availability zone support
+description: Learn how to migrate Azure Functions to availability zone support.
+Last updated: 08/29/2022
+# Migrate your function app to a zone-redundant plan
+
+Availability zone support for Azure Functions is available on [Premium (Elastic Premium)](../azure-functions/functions-premium-plan.md) and [Dedicated (App Service)](../azure-functions/dedicated-plan.md) plans. A zone-redundant function app plan automatically balances its instances between availability zones for higher availability. This article describes how to migrate to the public multi-tenant Premium plan with availability zone support. For migration to zone redundancy on Dedicated plans, see [Migrate App Service to availability zone support](migrate-app-service.md).
+
+## Downtime requirements
+
+Downtime will be dependent on how you decide to carry out the migration. Since you can't convert pre-existing Premium plans to use availability zones, migration will consist of a side-by-side deployment where you'll create new Premium plans. Downtime will depend on how you choose to redirect traffic from your old to your new availability zone enabled function app. For example, for HTTP based functions if you're using an [Application Gateway](../app-service/networking/app-gateway-with-service-endpoints.md), a [custom domain](../app-service/app-service-web-tutorial-custom-domain.md), or [Azure Front Door](../frontdoor/front-door-overview.md), downtime will be dependent on the time it takes to update those respective services with your new app's information. Alternatively, you can route traffic to multiple apps at the same time using a service such as [Azure Traffic Manager](../app-service/web-sites-traffic-manager.md) and only fully cutover to your new availability zone enabled apps when everything is deployed and fully tested. You can also [write defensive functions](../azure-functions/performance-reliability.md#write-defensive-functions) to ensure messages are not lost during the migration for non-HTTP functions.
+
+## Migration guidance: Redeployment
+
+If you want your function app to use availability zones, redeploy your app into a newly created availability zone enabled Premium function app plan.
+
+## How to redeploy
+
+The following steps describe how to enable availability zones.
+
+1. If you're already using the Premium SKU and are in one of the [supported regions](../azure-functions/azure-functions-az-redundancy.md#regional-availability), you can move on to the next step. Otherwise, you should create a new resource group in one of the supported regions.
+1. Create a Premium plan in one of the supported regions and the resource group. Ensure the [new Premium plan has zone redundancy enabled](../azure-functions/azure-functions-az-redundancy.md#how-to-deploy-a-function-app-on-a-zone-redundant-premium-plan); a CLI sketch follows these steps.
+1. Create and deploy your function apps into the new Premium plan using your desired [deployment method](../azure-functions/functions-deployment-technologies.md).
+1. After testing and enabling the new function apps, you can optionally disable or delete your previous non-availability zone apps.
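+
+As a minimal sketch of the plan creation step, assuming the `--zone-redundant` flag available in recent Azure CLI versions and placeholder names:
+
+```azurecli
+az functionapp plan create --resource-group MyResourceGroup --name MyPremiumPlan --location eastus2 --sku EP1 --zone-redundant
+```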
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about the Azure Functions Premium plan](../azure-functions/functions-premium-plan.md)
+
+> [!div class="nextstepaction"]
+> [Learn about Azure Functions support for availability zone redundancy](../azure-functions/azure-functions-az-redundancy.md)
+
+> [!div class="nextstepaction"]
+> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
+
+> [!div class="nextstepaction"]
+> [Azure Functions geo-disaster recovery](../azure-functions/functions-geo-disaster-recovery.md)
reliability Migrate Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-load-balancer.md
+
+ Title: Migrate Load Balancer to availability zone support
+description: Learn how to migrate Load Balancer to availability zone support.
+Last updated: 05/09/2022
+CustomerIntent: As a cloud architect/engineer, I need general guidance on migrating load balancers to using availability zones.
+
+
+# Migrate Load Balancer to availability zone support
+
+This guide describes how to migrate Load Balancer from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+A Standard load balancer supports additional capabilities in regions where availability zones are available. Availability zone configurations are available for both types of Standard load balancer: public and internal. A zone-redundant frontend survives zone failure by using dedicated infrastructure in all of the zones simultaneously. Additionally, you can pin a frontend to a specific zone. A zonal frontend is served by dedicated infrastructure in a single zone. Regardless of the zonal configuration, the backend pool can contain VMs from any zone.
+
+For a Standard zone-redundant load balancer, traffic is served by a single IP address. A single frontend IP address survives zone failure and can be used to reach all (non-impacted) backend pool members no matter the zone. One or more availability zones can fail, and the data path survives as long as one zone in the region remains healthy.
+
+You can choose to have a frontend guaranteed to a single zone, which is known as a zonal frontend. In this scenario, any inbound or outbound flow is served by a single zone in a region, and your frontend shares fate with the health of that zone. The data path is unaffected by failures in other zones. You can use zonal frontends to expose an IP address per availability zone.
+
+## Prerequisites
+
+- Availability zones are supported with the Standard SKU for both Load Balancer and Public IP.
+- The Basic SKU isn't supported.
+- To create or move this resource, you must have the Network Contributor role or higher.
+
+## Downtime requirements
+
+Downtime is required. All migration scenarios require some downtime because the resources used by the load balancer configurations must be changed.
+
+## Migration option 1: Enable existing Load Balancer to use availability zones (same region)
+
+Let's say you need to enable an existing load balancer to use availability zones within the same Azure region. You can't simply switch an existing Azure load balancer from non-AZ to AZ aware. However, you won't have to redeploy a load balancer to take advantage of this migration. To make your load balancer AZ aware, you'll have to recreate your load balancer's frontend IP configuration using a new zonal or zone-redundant IP and re-associate any existing load balancing rules to the new frontend. Note that this migration will incur downtime as rules are re-associated.
+
+> [!NOTE]
+> It isn't required to have a load balancer for each zone; rather, a single load balancer with multiple frontends (zonal or zone redundant) associated with their respective backend pools serves the purpose.
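+
+As a minimal sketch of this recreation step for a public load balancer, assuming placeholder names, you might create a zone-redundant Standard public IP and attach it as a new frontend:
+
+```azurecli
+# Create a zone-redundant Standard public IP.
+az network public-ip create --resource-group MyResourceGroup --name MyZrFrontendIP --sku Standard --zone 1 2 3
+
+# Add it as a new frontend IP configuration on the existing load balancer.
+az network lb frontend-ip create --resource-group MyResourceGroup --lb-name MyLoadBalancer --name MyZrFrontend --public-ip-address MyZrFrontendIP
+```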
+
+Because a frontend IP can be either zonal or zone redundant, you need to decide which option to choose based on your requirements. The following are recommendations for each:
+
+| **Frontend IP configuration** | **Recommendation** |
+| -- | -- |
+|Zonal frontend | We recommend creating a zonal frontend when the backend is concentrated in a particular zone. For example, if backend instances are pinned to zone 2, it makes sense to create the frontend IP configuration in availability zone 2. |
+| Zone-redundant frontend | When the resources (VMs, NICs, IP addresses, and so on) inside a backend pool are distributed across zones, we recommend creating a zone-redundant frontend. This provides high availability and ensures seamless connectivity even if a zone goes down. |
+
+## Migration option 2: Migrate Load Balancer to another region with AZs
+
+Depending on the type of load balancer you have, you'll need to follow different steps. The following sections cover migrating both public and internal load balancers.
+
+### Migrate an internal Load Balancer
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer. A private IP address in the virtual network is configured as the frontend (named LoadBalancerFrontend by default) for the load balancer. While configuring this frontend IP, you can select the availability zones.
+
+Azure internal load balancers can't be moved from one region to another, so you must associate the new load balancer with resources in the target region. For the migration, you can use an Azure Resource Manager template to export the existing configuration and virtual network of an internal load balancer. You can then stage the resource in another region by exporting the load balancer and virtual network to a template, modifying the parameters to match the destination region, and then deploying the template to the new region.
+
+To migrate an internal load balancer to availability zones across regions, see [moving internal Load Balancer across regions](../load-balancer/move-across-regions-internal-load-balancer-portal.md).
+
+### Migrate a public Load Balancer
+
+Azure external load balancers can't be moved between regions, so you must associate the new load balancer with resources in the target region.
+To redeploy a load balancer with the source configuration to a new zone-resilient region, the most suitable approach is to use an Azure Resource Manager template to export the existing configuration of the external load balancer. You can then stage the resource in another region by exporting the load balancer and public IP to a template, modifying the parameters to match the destination region, and then deploying the template to the new region.
+
+To migrate a public load balancer to availability zones across regions, see [moving public Load Balancer across regions](../load-balancer/move-across-regions-external-load-balancer-portal.md).
+
+### Limitations
+- Zones can't be changed, updated, or created for the resource after creation.
+- Resources can't be updated from zonal to zone-redundant or vice versa after creation.
+
+## Next steps
+
+ To learn more about load balancers and availability zones, see:
+
+> [!div class="nextstepaction"]
+> [Load Balancer and availability zones](../load-balancer/load-balancer-standard-availability-zones.md).
reliability Migrate Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-monitor-log-analytics.md
+
+ Title: Migrate Log Analytics workspaces to availability zone support
+description: Learn how to migrate Log Analytics workspaces to availability zone support.
+Last updated: 07/21/2022
+# Migrate Log Analytics workspaces to availability zone support
+
+This guide describes how to migrate Log Analytics workspaces from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+> [!NOTE]
+> Application Insights resources can also use availability zones, but only if they are workspace-based and the workspace uses a dedicated cluster as explained below. Classic (non-workspace-based) Application Insights resources cannot use availability zones.
+
+## Prerequisites
+
+For availability zone support, your workspace must be located in one of the following supported regions:
+
+- East US 2
+- West US 2
+
+## Dedicated clusters
+
+Azure Monitor support for availability zones requires a Log Analytics workspace linked to an [Azure Monitor dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md). Dedicated clusters are a deployment option that enables advanced capabilities for Azure Monitor Logs including availability zones.
+
+Not all dedicated clusters can use availability zones. Dedicated clusters created after mid-October 2020 can be set to support availability zones when they're created, and new clusters created after that date default to being enabled for availability zones in regions where Azure Monitor supports them.
+
+## Downtime requirements
+
+There are no downtime requirements.
+
+## Migration process: Moving to a dedicated cluster
+
+### Step 1: Determine the current cluster for your workspace
+
+To determine the current workspace link status for your workspace, use [CLI, PowerShell or REST](../azure-monitor/logs/logs-dedicated-clusters.md#check-workspace-link-status) to retrieve the [cluster details](../azure-monitor/logs/logs-dedicated-clusters.md#check-cluster-provisioning-status). If the cluster uses an availability zone, then it will have a property called `isAvailabilityZonesEnabled` with a value of `true`. Once a cluster is created, this property cannot be altered.
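+
+As an illustration, you can check this property with the CLI; the sketch below assumes the property surfaces at the top level of the cluster output:
+
+```azurecli
+az monitor log-analytics cluster show --resource-group MyResourceGroup --name MyCluster --query isAvailabilityZonesEnabled
+```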
+
+### Step 2: Create a dedicated cluster with availability zone support
+
+Move your workspace to an availability zone by [creating a new dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md#create-a-dedicated-cluster) in a region that supports availability zones. The cluster will automatically be enabled for availability zones. Then [link your workspace to the new cluster](../azure-monitor/logs/logs-dedicated-clusters.md#link-a-workspace-to-a-cluster).
+
+> [!IMPORTANT]
+> Availability zone support is defined on the cluster at creation time and can't be modified.
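+
+As a minimal sketch of creating a dedicated cluster, assuming placeholder names and the 500-GB daily minimum capacity described under Billing:
+
+```azurecli
+az monitor log-analytics cluster create --resource-group MyResourceGroup --name MyCluster --location eastus2 --sku-capacity 500
+```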
+
+Transitioning to a new cluster can be a gradual process. Don't remove the previous cluster until it has been purged of any data. For example, if your workspace retention is set to 60 days, you may want to keep your old cluster running for that period before removing it.
+
+Any queries against your workspace will query both clusters as required to provide you with a single, unified result set. That means that all Azure Monitor features relying on the workspace such as workbooks and dashboards will keep getting the full, unified result set based on data from both clusters.
+
+## Billing
+
+There is a [cost for using a dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md#create-a-dedicated-cluster). It requires a daily capacity reservation of 500 GB.
+
+If you already have a dedicated cluster and choose to retain it to access its data, you'll be charged for both dedicated clusters. Starting August 4, 2021, the minimum required capacity reservation for dedicated clusters is reduced from 1,000 GB/day to 500 GB/day, so we'd recommend applying that minimum to your old cluster to reduce charges.
+
+The new cluster isn't billed during its first day, to avoid double billing during configuration. Only data ingested before the migration completes is still billed on the date of migration.
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Azure Monitor Logs Dedicated Clusters](../azure-monitor/logs/logs-dedicated-clusters.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](availability-zones-service-support.md)
reliability Migrate Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-recovery-services-vault.md
+
+ Title: Migrate Azure Recovery Services Vault to availability zone support
+description: Learn how to migrate your Azure Recovery Services Vault to availability zone support.
+Last updated: 06/24/2022
+# Migrate Azure Recovery Services vault to availability zone support
+
+This article describes how to migrate a Recovery Services vault from non-availability zone support to availability zone support.
+
+Recovery Services vault supports local redundancy, zone redundancy, and geo-redundancy for storage. Storage redundancy is a setting that must be configured *before* protecting any workloads. Once a workload is protected in Recovery Services vault, the setting is locked and can't be changed. To learn more about different storage redundancy options, see [Set storage redundancy](../backup/backup-create-rs-vault.md#set-storage-redundancy).
+
+To change your current Recovery Services vault to availability zone support, you need to deploy a new vault. Perform the following actions to create a new vault and migrate your existing workloads.
+
+## Prerequisites
+
+Standard SKU is supported.
+
+## Downtime requirements
+
+Because you're required to deploy a new Recovery Services vault and migrate your workloads to the new vault, some downtime is expected.
+
+## Considerations
+
+When switching Recovery Services vaults for backup, the existing backup data remains in the old vault and can't be migrated to the new one.
+
+## Migration Step: Deploy a new Recovery Services vault
+
+To change storage redundancy after the Recovery Services vault is locked in a specific configuration:
+
+1. [Deploy a new Recovery Services vault](../backup/backup-create-rs-vault.md).
+
+1. Configure the relevant storage redundancy option. Learn how to [Set storage redundancy](../backup/backup-create-rs-vault.md#set-storage-redundancy). A CLI sketch of both steps follows.
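+
+As an illustrative sketch of these two steps, assuming placeholder names and CLI support for the redundancy value shown:
+
+```azurecli
+# Create the new Recovery Services vault.
+az backup vault create --resource-group MyResourceGroup --name MyNewVault --location eastus2
+
+# Set the storage redundancy before protecting any workloads.
+az backup vault backup-properties set --resource-group MyResourceGroup --name MyNewVault --backup-storage-redundancy ZoneRedundant
+```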
+
+**Choose an Azure service:**
+
+# [Azure Backup](#tab/backup)
+
+If your workloads are backed up by the old vault and you want to reassign them to the new vault, follow these steps:
+
+1. Stop backup for:
+
+ 1. [Virtual Machines](../backup/backup-azure-manage-vms.md#stop-protecting-a-vm).
+
+ 1. [SQL Server database in Azure VM](../backup/manage-monitor-sql-database-backup.md#stop-protection-for-a-sql-server-database).
+
+
+ 1. [Storage Files](../backup/manage-afs-backup.md#stop-protection-on-a-file-share).
+
+ 1. [SAP HANA database in Azure VM](../backup/sap-hana-db-manage.md#stop-protection-for-an-sap-hana-database).
+
+1. To unregister from the old vault, follow these steps:
+
+ 1. [Virtual Machines](../backup/backup-azure-move-recovery-services-vault.md#move-an-azure-virtual-machine-to-a-different-recovery-service-vault).
+
+ 1. [SQL Server database in Azure VM](../backup/manage-monitor-sql-database-backup.md#unregister-a-sql-server-instance).
+
+ Move the SQL database on Azure VM to another resource group to completely break the association with the old vault.
+
+ 1. [Storage Files](../backup/manage-afs-backup.md#unregister-a-storage-account).
+
+ 1. [SAP HANA database in Azure VM](../backup/sap-hana-db-manage.md#unregister-an-sap-hana-instance).
+
+ Move the SAP HANA database on Azure VM to another resource group to completely break the association with the old vault.
+
+1. Configure the various backup items for protection in the new vault.
+
+>[!IMPORTANT]
+>Existing recovery points in the old vault are retained, and objects can be restored from them. However, because protection is stopped, the backup policy no longer applies to the retained data. As a result, recovery points won't expire through policy and must be deleted manually. If this isn't done, the recovery points are retained indefinitely and continue to incur cost. To avoid the cost for the remaining recovery points, see [Delete protected items in the cloud](../backup/backup-azure-delete-vault.md?tabs=portal#delete-protected-items-in-the-cloud).
+
+# [Azure Site Recovery](#tab/site-recovery)
+
+If you have any workloads in the old vault that are currently protected by Azure Site Recovery, see the following sections.
+
+## Azure to Azure replication
+
+1. Disable replication in the old vault. See [Disable protection for an Azure VM (Azure to Azure)](../site-recovery/site-recovery-manage-registration-and-protection.md#disable-protection-for-a-azure-vm-azure-to-azure).
+
+1. Enable replication in the new vault. See [Enable replication](../site-recovery/azure-to-azure-how-to-enable-replication.md#enable-replication).
+
+1. If you don't need the old Recovery Services vault, you can then delete it (provided it has no other active replications). To delete the old vault, see [Delete a Site Recovery Services vault](../site-recovery/delete-vault.md).
+
+## VMware to Azure replication
+
+Learn about [Registering a VMware configuration server with a different vault](../site-recovery/vmware-azure-manage-configuration-server.md#register-a-configuration-server-with-a-different-vault).
+
+## Physical to Azure replication
+
+Learn about [Registering a configuration server with a different vault](../site-recovery/vmware-azure-manage-configuration-server.md#register-a-configuration-server-with-a-different-vault).
+
+## Hyper-V Site to Azure replication
+
+Follow these steps:
+
+1. Unregister the server in the old vault. See [Unregister a Hyper-V host in a Hyper-V site](../site-recovery/site-recovery-manage-registration-and-protection.md#unregister-a-hyper-v-host-in-a-hyper-v-site).
+
+1. Enable replication in the new vault.
+
+## Hyper-V VM to Azure replication
+
+1. Disable replication in the old vault. See [Disable protection for a Hyper-V virtual machine (Hyper-V to Azure)](../site-recovery/site-recovery-manage-registration-and-protection.md#disable-protection-for-a-hyper-v-virtual-machine-hyper-v-to-azure).
+
+1. Enable replication in the new vault.
+
+## SCVMM to Azure replication
+
+1. Disable replication in the old vault. See [Disable protection for a Hyper-V virtual machine replicating to Azure using the System Center VMM to Azure scenario](../site-recovery/site-recovery-manage-registration-and-protection.md#disable-protection-for-a-hyper-v-virtual-machine-replicating-to-azure-using-the-system-center-vmm-to-azure-scenario).
+
+1. Enable replication in the new vault.
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Migrate Search Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-search-service.md
+
+ Title: Migrate Azure Cognitive Search to availability zone support
+description: Learn how to migrate Azure Cognitive Search to availability zone support.
+Last updated: 08/01/2022
+# Migrate Azure Cognitive Search to availability zone support
+
+This guide describes how to migrate Azure Cognitive Search from non-availability zone support to availability zone support.
+
+Azure Cognitive Search services can take advantage of availability zone support [in regions that support availability zones](../search/search-performance-optimization.md#availability-zones). Services with [two or more replicas](../search/search-capacity-planning.md) in these regions, created after availability zone support was enabled, automatically utilize availability zones. Each replica is placed in a different availability zone within the region. If you have more replicas than availability zones, the replicas are distributed across availability zones as evenly as possible.
+
+If a search service was created before availability zone support was enabled in its region, the search service must be recreated to take advantage of availability zone support.
+
+## Prerequisites
+
+The following are the current requirements/limitations for enabling availability zone support:
+
+- The search service must be in [a region that supports availability zones](../search/search-performance-optimization.md#availability-zones).
+- The search service must be created after availability zone support was enabled in its region.
+- The search service must have [at least two replicas](../search/search-performance-optimization.md#high-availability).
+
+## Downtime requirements
+
+Downtime will be dependent on how you decide to carry out the migration. Migration will consist of a side-by-side deployment where you'll create a new search service. Downtime will depend on how you choose to redirect traffic from your old search service to your new availability zone enabled search service. For example, if you're using [Azure Front Door](../frontdoor/front-door-overview.md), downtime will be dependent on the time it takes to update Azure Front Door with your new search service's information. Alternatively, you can route traffic to multiple search services at the same time using [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md).
+
+## Migration guidance: Recreate your search service
+
+### When to recreate your search service
+
+If you created your search service in a region that supports availability zones before this support was enabled, you'll need to recreate the search service.
+
+### How to recreate your search service
+
+1. [Create a new search service](../search/search-create-service-portal.md) in the same region as the old search service. This region should [support availability zones on or after the current date](../search/search-performance-optimization.md#availability-zones).
+
+ >[!IMPORTANT]
+ >The [free and basic tiers do not support availability zones](../search/search-sku-tier.md#feature-availability-by-tier), and so they should not be used.
+1. Add [at least two replicas to your new search service](../search/search-capacity-planning.md#add-or-reduce-replicas-and-partitions). Once the search service has at least two replicas, it automatically takes advantage of availability zone support. A CLI sketch of the first two steps follows these steps.
+1. Migrate your data from your old search service to your new search service by rebuilding all of your search indexes from your old service. To rebuild all of your search indexes, choose one of the following two options:
+
+ - [Move individual indexes from your old search service to your new one](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/index-backup-restore)
+ - Rebuild indexes from an external data source if one is available.
+1. Redirect traffic from your old search service to your new search service. This may require updates to the application that uses the old search service.
+>[!TIP]
+>Services such as [Azure Front Door](../frontdoor/front-door-overview.md) and [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) help simplify this process.
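+
+As a minimal sketch of the first two steps, assuming placeholder names and a region with availability zone support:
+
+```azurecli
+az search service create --resource-group MyResourceGroup --name my-new-search-service --location eastus2 --sku standard --replica-count 2 --partition-count 1
+```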
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to create and deploy ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md)
+
+> [!div class="nextstepaction"]
+> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
+
+> [!div class="nextstepaction"]
+> [Learn about high availability in Azure Cognitive Search](../search/search-performance-optimization.md)
reliability Migrate Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-storage.md
+
+ Title: Migrate Azure Storage accounts to availability zone support
+description: Learn how to migrate your Azure storage accounts to availability zone support.
+Last updated: 09/27/2022
+# Migrate Azure Storage accounts to availability zone support
+
+This guide describes how to migrate or convert Azure Storage accounts to add availability zone support.
+
+Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets the Service-Level Agreement (SLA) for Azure Storage even in the face of failures.
+
+By default, data in a storage account is replicated in a single data center in the primary region. If your application must be highly available, you can convert the data in the primary region to zone-redundant storage (ZRS). ZRS takes advantage of Azure availability zones to replicate data in the primary region across three separate data centers.
+
+Azure Storage offers the following types of replication:
+
+- Locally redundant storage (LRS)
+- Zone-redundant storage (ZRS)
+- Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
+- Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
+
+For an overview of each of these options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
+
+This article describes two basic options for adding availability zone support to a storage account:
+
+- [Conversion](#option-1-conversion): If your application must be highly available, you can convert the data in the primary region to zone-redundant storage (ZRS). ZRS takes advantage of Azure availability zones to replicate data in the primary region across three separate data centers.
+- [Manual migration](#option-2-manual-migration): Manual migration gives you complete control over the migration process by allowing you to use tools such as AzCopy to move your data to a new storage account with the desired replication settings at a time of your choosing.
+
+> [!NOTE]
+> For complete details on how to change how your storage account is replicated, see [Change how a storage account is replicated](../storage/common/redundancy-migration.md).
+
+## Prerequisites
+
+Before making any changes, review the [limitations for changing replication types](../storage/common/redundancy-migration.md#limitations-for-changing-replication-types) to make sure your storage account is eligible for migration or conversion, and to understand the options available to you. Many storage accounts can be converted directly to ZRS, while others either require a multi-step process or a manual migration. After reviewing the limitations, choose the right option in this article to convert your storage account based on:
+
+- [Storage account type](../storage/common/redundancy-migration.md#storage-account-type)
+- [Region](../storage/common/redundancy-migration.md#region)
+- [Access tier](../storage/common/redundancy-migration.md#access-tier)
+- [Protocols enabled](../storage/common/redundancy-migration.md#protocol-support)
+- [Failover status](../storage/common/redundancy-migration.md#failover-and-failback)
+
+## Downtime requirements
+
+During a conversion to ZRS, you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the conversion process and there is no data loss. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the conversion.
+
+If you choose manual migration, some downtime is required, but you have more control over when the process starts and completes.
+
+## Option 1: Conversion
+
+During a conversion, you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and there is no data loss associated with a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
+
+### When to perform a conversion
+
+Perform a conversion if:
+
+- You want to convert your storage account from LRS to ZRS in the primary region with no application downtime.
+- You don't need the change to be completed by a certain date. While Microsoft handles your request for conversion promptly, there's no guarantee as to when it will complete. Generally, the more data you have in your account, the longer it takes to replicate that data.
+- You want to minimize the amount of manual effort required to complete the change.
+
+### Conversion considerations
+
+Conversion can be used in most situations to add availability zone support, but in some cases you will need to use multiple steps or perform a manual migration. For example, if you also want to add or remove geo-redundancy (GRS) or read access (RA) to the secondary region, you will need to perform a two-step process. Perform the conversion to ZRS as one step and the GRS and/or RA change as a separate step. These steps can be performed in any order.
+
+A full list of things to consider can be found in [Limitations](../storage/common/redundancy-migration.md#limitations-for-changing-replication-types).
+
+### How to perform a conversion
+
+A conversion can be accomplished in one of two ways:
+
+- [A Customer-initiated conversion (preview)](#customer-initiated-conversion-preview)
+- [Request a conversion by creating a support request](#request-a-conversion-by-creating-a-support-request)
+
+#### Customer-initiated conversion (preview)
+
+> [!IMPORTANT]
+> Customer-initiated conversion is currently in preview and available in all public ZRS regions except for the following:
+>
+> - (Europe) West Europe
+> - (Europe) UK South
+> - (North America) Canada Central
+> - (North America) East US
+> - (North America) East US 2
+>
+> To opt in to the preview, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md) and specify **CustomerInitiatedMigration** as the feature name.
+>
+> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Customer-initiated conversion adds a new option for customers to start a conversion. Now, instead of needing to open a support request, customers can request the conversion directly from within the Azure portal. Once initiated, the conversion could still take up to 72 hours to actually begin, but potential delays related to opening and managing a support request are eliminated.
+
+Customer-initiated conversion is only available from the Azure portal, not from PowerShell or the Azure CLI. To initiate the conversion, follow these steps:
+
+1. Navigate to your storage account in the Azure portal.
+1. Under **Data management** select **Redundancy**.
+1. Update the **Redundancy** setting.
+1. Select **Save**.
+
+ :::image type="content" source="../storage/common/media/redundancy-migration/change-replication-option.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="../storage/common/media/redundancy-migration/change-replication-option.png":::
+
+#### Request a conversion by creating a support request
+
+Customers can still request a conversion by opening a support request with Microsoft.
+
+> [!IMPORTANT]
+> If you need to convert more than one storage account, create a single support ticket and specify the names of the accounts to convert on the **Additional details** tab.
+
+Follow these steps to request a conversion from Microsoft:
+
+1. In the Azure portal, navigate to a storage account that you want to convert.
+1. Under **Support + troubleshooting**, select **New Support Request**.
+1. Complete the **Problem description** tab based on your account information:
+ - **Summary**: (some descriptive text).
+ - **Issue type**: Select **Technical**.
+ - **Subscription**: Select your subscription from the drop-down.
+ - **Service**: Select **My Services**, then **Storage Account Management** for the **Service type**.
+ - **Resource**: Select a storage account to convert. If you need to specify multiple storage accounts, you can do so on the **Additional details** tab.
+ - **Problem type**: Choose **Data Migration**.
+ - **Problem subtype**: Choose **Migrate to ZRS, GZRS, or RA-GZRS**.
+
+ :::image type="content" source="../storage/common/media/redundancy-migration/request-live-migration-problem-desc-portal.png" alt-text="Screenshot showing how to request a conversion - Problem description tab.":::
+
+1. Select **Next**. The **Recommended solution** tab might be displayed briefly before it switches to the **Solutions** page. On the **Solutions** page, you can check the eligibility of your storage account(s) for conversion:
+ - **Target replication type**: (choose the desired option from the drop-down)
+ - **Storage accounts from**: (enter a single storage account name or a list of accounts separated by semicolons)
+ - Select **Submit**.
+
+ :::image type="content" source="../storage/common/media/redundancy-migration/request-live-migration-solutions-portal.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page.":::
+
+1. Take the appropriate action if the results indicate your storage account is not eligible for conversion. If it is eligible, select **Return to support request**.
+
+1. Select **Next**. If you have more than one storage account to migrate, then on the **Details** tab, specify the name for each account, separated by a semicolon.
+
+ :::image type="content" source="../storage/common/media/redundancy-migration/request-live-migration-details-portal.png" alt-text="Screenshot showing how to request a conversion - Additional details tab.":::
+
+1. Fill out the additional required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. A support person will contact you to provide any assistance you may need.
+
+## Option 2: Manual migration
+
+A manual migration provides more flexibility and control than a conversion. You can use this option if you need the migration to complete by a certain date, or if conversion is [not supported for your scenario](../storage/common/redundancy-migration.md#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. See [Move an Azure Storage account to another region](../storage/common/storage-account-move.md) for more details.
+
+### When to use a manual migration
+
+Use a manual migration if:
+
+- You need the migration to be completed by a certain date.
+
+- You want to migrate your data to a ZRS storage account that's in a different region than the source account.
+
+- You want to add or remove zone-redundancy and you don't want to use the customer-initiated conversion feature while it's in preview.
+
+- Your storage account is a premium page blob or block blob account.
+
+- Your storage account includes data that's in the archive tier.
+
+### How to manually migrate Azure Storage accounts
+
+To manually migrate your Azure Storage accounts:
+
+1. Create a new storage account in the primary region with zone redundant storage (ZRS) as the redundancy setting.
+
+1. Copy the data from your existing storage account to the new storage account. To perform a copy operation, use one of the following options:
+
+   - **Option 1:** Copy data by using an existing tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md), [Azure Data Factory](../data-factory/connector-azure-blob-storage.md?tabs=data-factory), one of the Azure Storage client libraries, or a reliable third-party tool (see the sketch after these steps).
+
+ - **Option 2:** If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account to your cluster. Then, parallelize the data copy process with a tool like [DistCp](https://hadoop.apache.org/docs/r1.2.1/distcp.html).
+
+1. Determine which type of replication you need and follow the directions in [Change how a storage account is replicated](../storage/common/redundancy-migration.md).
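+
+The following is a minimal sketch of steps 1 and 2 for a blob workload, assuming placeholder account, container, and SAS token values. It creates the ZRS destination account and then copies one container with AzCopy:
+
+```azurecli
+# Step 1: create the destination account with ZRS redundancy in the primary region
+az storage account create \
+    --name <dest-zrs-account> \
+    --resource-group <resource-group> \
+    --location <primary-region> \
+    --sku Standard_ZRS \
+    --kind StorageV2
+
+# Step 2: copy a container from the source account with AzCopy (SAS tokens assumed)
+azcopy copy \
+    'https://<source-account>.blob.core.windows.net/<container>?<source-sas>' \
+    'https://<dest-zrs-account>.blob.core.windows.net/<container>?<dest-sas>' \
+    --recursive
+```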
+
+## Next steps
+
+For detailed guidance on changing the replication configuration for an Azure Storage account from any type to any other type, see:
+
+> [!div class="nextstepaction"]
+> [Change how a storage account is replicated](../storage/common/redundancy-migration.md)
+
+For more guidance on moving an Azure Storage account to another region, see:
+
+> [!div class="nextstepaction"]
+> [Move an Azure Storage account to another region](../storage/common/storage-account-move.md)
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Azure Storage redundancy](../storage/common/storage-redundancy.md)
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-vm.md
+
+ Title: Migrate Azure Virtual Machines and Azure Virtual Machine Scale Sets to availability zone support
+description: Learn how to migrate your Azure Virtual Machines and Virtual Machine Scale Sets to availability zone support.
+Last updated: 04/21/2022
+
+# Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support
+
+This guide describes how to migrate Virtual Machines (VMs) and Virtual Machine Scale Sets (VMSS) from non-availability zone support to availability zone support. We'll take you through the different options for migration, including how you can use availability zone support for Disaster Recovery solutions.
+
+Virtual Machines (VMs) and Virtual Machine Scale Sets (VMSS) are zonal services, which means that VM resources can be deployed by using one of the following methods:
+
+- VM resources are deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements.
+
+- VM resources are replicated to one or more zones within the region to improve the resiliency of the application and data in a High Availability (HA) architecture.
+
+When you migrate resources to availability zone support, we recommend that you select multiple zones for your new VMs and VMSS to ensure the high availability of your compute resources.
+
+## Prerequisites
+
+To migrate to availability zone support, your VM SKUs must be available across the zones in your region. To check for VM SKU availability, use one of the following methods:
+
+- Use PowerShell to [Check VM SKU availability](../virtual-machines/windows/create-PowerShell-availability-zone.md#check-vm-sku-availability).
+- Use the Azure CLI to [Check VM SKU availability](../virtual-machines/linux/create-cli-availability-zone.md#check-vm-sku-availability) (see the sketch after this list).
+- Go to [Foundational Services](availability-zones-service-support.md#an-icon-that-signifies-this-service-is-foundational-foundational-services).
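+
+For example, the following Azure CLI sketch lists the SKUs in a region that are available in availability zones; the region and size filter are assumptions:
+
+```azurecli
+# List zone-enabled VM SKUs in a region; <region> is a placeholder, Standard_D is a size prefix filter
+az vm list-skus \
+    --location <region> \
+    --size Standard_D \
+    --zone \
+    --output table
+```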
+
+## Downtime requirements
+
+Because zonal VMs are created in specific availability zones, all migration options mentioned in this article require downtime during deployment.
+
+## Migration Option 1: Redeployment
+
+### When to use redeployment
+
+Use the redeployment option if you have set up good Infrastructure as Code (IaC) practices to manage infrastructure. This redeployment option gives you more control and the ability to automate various processes within your deployment pipelines.
+
+### Redeployment considerations
+
+- When you redeploy your VM and VMSS resources, the underlying resources such as managed disk and IP address for the VM are created in the same availability zone. You must use a Standard SKU public IP address and load balancer to create zone-redundant network resources.
+
+- Existing managed disks without availability zone support can't be attached to a VM with availability zone support. To attach existing managed disks to a VM with availability zone support, you'll need to take a snapshot of the current disks, and then create your VM with the new managed disks attached.
+
+- For zonal deployments that require reasonably low network latency and good performance between the application tier and data tier, use [proximity placement groups](../virtual-machines/co-location.md). Proximity placement groups force grouping of different VM resources under a single network spine. For an example of an SAP workload that uses proximity placement groups, see [Azure proximity placement groups for optimal network latency with SAP applications](../virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md).
+### How to redeploy
+
+If you want to migrate the data on your current managed disks when creating a new VM, follow the directions in [Migrate your managed disks](#migrate-your-managed-disks).
+
+If you only want to create a new VM with new managed disks in an availability zone, see:
+
+- [Create VM using Azure CLI](../virtual-machines/linux/create-cli-availability-zone.md)
+- [Create VM using Azure PowerShell](../virtual-machines/windows/create-PowerShell-availability-zone.md)
+- [Create VM using Azure portal](../virtual-machines/create-portal-availability-zone.md?tabs=standard)
+
+To learn how to create VMSS in an availability zone, see [Create a virtual machine scale set that uses Availability Zones](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md).
+
+### Migrate your managed disks
+
+In this section, you'll migrate the data from your current managed disks to either zone-redundant storage (ZRS) managed disks or zonal managed disks.
+
+#### Step 1: Create your snapshot
+
+The easiest and cleanest way to create a snapshot is to do so while the VM is offline. See [Create snapshots while the VM is offline](../virtual-machines/backup-and-disaster-recovery-for-azure-iaas-disks.md#create-snapshots-while-the-vm-is-offline). If you choose this approach, expect some downtime. To create a snapshot of your VM using the Azure portal, PowerShell, or the Azure CLI, see [Create a snapshot of a virtual hard disk](../virtual-machines/snapshot-copy-managed-disk.md).
+
+If you'll be taking a snapshot of a disk that's attached to a running VM, read the guidance in [Create snapshots while the VM is running](../virtual-machines/backup-and-disaster-recovery-for-azure-iaas-disks.md#create-snapshots-while-the-vm-is-running) before proceeding.
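+
+As a minimal sketch of the offline approach (VM, resource group, and snapshot names are placeholders), the following Azure CLI commands deallocate the VM and snapshot its OS disk:
+
+```azurecli
+# Look up the OS disk ID of the VM
+osDiskId=$(az vm show \
+    --resource-group <resource-group> \
+    --name <vm-name> \
+    --query "storageProfile.osDisk.managedDisk.id" \
+    --output tsv)
+
+# Deallocate the VM so the snapshot is taken while the disk is offline
+az vm deallocate --resource-group <resource-group> --name <vm-name>
+
+# Create the snapshot from the OS disk
+az snapshot create \
+    --resource-group <resource-group> \
+    --name <snapshot-name> \
+    --source $osDiskId
+```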
+
+>[!NOTE]
+> The source managed disks remain intact with their current configurations and you'll continue to be billed for them. To avoid this, you must manually delete the disks once you've finished your migration and confirmed the new disks are working. For more information, see [Find and delete unattached Azure managed and unmanaged disks](../virtual-machines/windows/find-unattached-disks.md).
+#### Step 2: Migrate the data on your managed disks
+
+Now that you have snapshots of your original disks, you can use them to create either ZRS managed disks or zonal managed disks.
+
+##### Migrate your data to zonal managed disks
+
+To migrate a non-zonal managed disk to zonal:
+
+1. Create a zonal managed disk from the source disk snapshot. The zone parameter should match your zonal VM. To create a zonal managed disk from the snapshot, you can use the [Azure CLI](../virtual-machines/scripts/create-managed-disk-from-snapshot.md) (see the example below), [PowerShell](../virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot.md), or the Azure portal.
+
+ ```azurecli
+ az disk create --resource-group $resourceGroupName --name $diskName --location $location --zone $zone --sku $storageType --size-gb $diskSize --source $snapshotId
+ ```
+##### Migrate your data to ZRS managed disks
+
+>[!IMPORTANT]
+> Zone-redundant storage (ZRS) for managed disks has some restrictions. For more information, see [Limitations](../virtual-machines/disks-deploy-zrs.md?tabs=portal#limitations).
+
+1. Create a ZRS managed disk from the source disk snapshot by using the following Azure CLI snippet:
+
+ ```azurecli
+    # Create a new ZRS managed disk by using the snapshot ID and a supported SKU
+ storageType=Premium_ZRS
+ location=westus2
+
+ az disk create --resource-group $resourceGroupName --name $diskName --sku $storageType --size-gb $diskSize --source $snapshotId
+
+ ```
+
+#### Step 3: Create a new VM with your new disks
+
+Now that you have migrated your data to ZRS managed disks or zonal managed disks, create a new VM with these new disks set as the OS and data disks:
+
+```azurecli
+az vm create -g MyResourceGroup -n MyVm --attach-os-disk newZonalOSDiskCopy --attach-data-disks newZonalDataDiskCopy --os-type linux
+```
+## Migration Option 2: Azure Resource Mover
+
+### When to use Azure Resource Mover
+
+Use Azure Resource Mover for an easy way to move VMs or encrypted VMs from one region without availability zones to another with availability zone support. If you want to learn more about the benefits of using Azure Resource Mover, see [Why use Azure Resource Mover?](../resource-mover/overview.md#why-use-resource-mover).
+
+### Azure Resource Mover considerations
+
+When you use Azure Resource Mover, all keys and secrets are copied from the source key vault to the newly created destination key vault in your target region. All resources related to your customer-managed keys, such as Azure Key Vaults, disk encryption sets, VMs, disks, and snapshots, must be in the same subscription and region. Azure Key Vault's default availability and redundancy feature can't be used as the destination key vault for the moved VM resources, even if the target region is a secondary region to which your source key vault is replicated.
+
+### How to use Azure Resource Mover
+
+To learn how to move VMs to another region, see [Move Azure VMs to an availability zone in another region](../resource-mover/move-region-availability-zone.md).
+
+To learn how to move encrypted VMs to another region, see [Tutorial: Move encrypted Azure VMs across regions](../resource-mover/tutorial-move-region-encrypted-virtual-machines.md).
+
+## Disaster recovery considerations
+
+Typically, availability zones are used to deploy VMs in a High Availability configuration. They may be too close to each other to serve as a Disaster Recovery solution during a natural disaster. However, there are scenarios where availability zones can be used for Disaster Recovery. To learn more, see [Using Availability Zones for Disaster Recovery](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md#using-availability-zones-for-disaster-recovery).
+
+The following requirements should be part of a disaster recovery strategy that helps your organization run its workloads during planned or unplanned outages across zones:
+
+- The source VM must already be a zonal VM, which means that it's placed in a logical zone.
+- You'll need to replicate your VM from one zone to another zone by using the Azure Site Recovery service.
+- Once your VM is replicated to another zone, you can follow steps to run a disaster recovery drill, fail over, reprotect, and fail back.
+- To enable VM disaster recovery between availability zones, follow the instructions in [Enable Azure VM disaster recovery between availability zones](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md).
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Migrate Workload Aks Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-workload-aks-mysql.md
+
+ Title: Migrate Azure Kubernetes Service and MySQL Flexible Server workloads to availability zone support
+description: Learn how to migrate Azure Kubernetes Service and MySQL Flexible Server workloads to availability zone support.
+Last updated: 08/29/2022
+
+# Migrate Azure Kubernetes Service (AKS) and MySQL Flexible Server workloads to availability zone support
+
+This guide describes how to migrate an Azure Kubernetes Service and MySQL Flexible Server workload to complete availability zone support across all dependent services. For a complete list of all workload dependencies, see [Workload service dependencies](#workload-service-dependencies).
+
+Availability zone support for this workload must be enabled during the creation of your AKS cluster or MySQL Flexible Server. If you want availability zone support for an existing AKS cluster and MySQL Flexible Server, you'll need to redeploy those resources.
+
+This migration guidance focuses mainly on the infrastructure and availability considerations of running an AKS and MySQL Flexible Server workload architecture on Azure.
+## Workload service dependencies
+
+To provide full workload support for availability zones, each service dependency in the workload must support availability zones.
+
+There are two types of availability zone support to consider: zonal and zone-redundant.
+
+The AKS and MySQL workload architecture consists of the following component dependencies:
+
+### Azure Kubernetes Service (AKS)
+
+- *Zonal*: The system node pool and user node pools are zonal when you pre-select the zones in which the node pools are deployed during creation time. We recommend that you pre-select all three zones for better resiliency. More user node pools that support availability zones can be added to an existing AKS cluster by supplying a value for the `zones` parameter (see the sketch after this list).
+
+- *Zone-redundant*: Kubernetes control plane components such as *etcd*, *API server*, *Scheduler*, and *Controller Manager* are automatically replicated or distributed across zones.
+
+ >[!NOTE]
+ >To enable zone-redundancy of the AKS cluster control plane components, you must define your default system node pool with zones when you create an AKS cluster. Adding more zonal node pools to an existing non-zonal AKS cluster won't make the AKS cluster zone-redundant, because that action doesn't distribute the control plane components across zones after-the-fact.
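+
+The following Azure CLI sketch shows both approaches; the cluster and resource group names are placeholders:
+
+```azurecli
+# Create an AKS cluster whose default (system) node pool spans zones 1, 2, and 3
+az aks create \
+    --resource-group <resource-group> \
+    --name <cluster-name> \
+    --node-count 3 \
+    --zones 1 2 3 \
+    --generate-ssh-keys
+
+# Add a user node pool that also spans all three zones
+az aks nodepool add \
+    --resource-group <resource-group> \
+    --cluster-name <cluster-name> \
+    --name userpool \
+    --node-count 3 \
+    --zones 1 2 3
+```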
+
+### Azure Database for MySQL Flexible Server
+
+- *Zonal*: The zonal availability mode means that a standby server is always available within the same zone as the primary server. While this option reduces failover time and network latency, it's less resilient due to a single zone outage impacting both the primary and standby servers.
+
+- *Zone-redundant*: The zone-redundant availability mode means that a standby server is always available within another zone in the same region as the primary server. The primary and standby servers are placed in two different zones. We recommend this configuration for better resiliency (see the sketch after this list).
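+
+The following is a minimal Azure CLI sketch of a zone-redundant deployment, assuming placeholder names and a General Purpose SKU (zone-redundant high availability isn't available in the Burstable tier):
+
+```azurecli
+# Create a zone-redundant MySQL flexible server; names, region, and SKU are placeholders
+az mysql flexible-server create \
+    --resource-group <resource-group> \
+    --name <server-name> \
+    --location <region> \
+    --tier GeneralPurpose \
+    --sku-name Standard_D2ds_v4 \
+    --high-availability ZoneRedundant \
+    --zone 1 \
+    --standby-zone 2
+```
+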
+### Azure Standard Load Balancer or Azure Application Gateway
+
+#### Standard Load Balancer
+
+To understand considerations related to Standard Load Balancer resources, see [Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md).
+
+- *Zone-redundant*: Choosing zone-redundancy is the recommended way to configure your Frontend IP with your existing Load Balancer. The zone-redundant front-end corresponds with the AKS cluster back-end pool, which is distributed across multiple zones.
+
+- *Zonal*: If you're pinning your node pools to specific zones, such as zones 1 and 2, you can pre-select zones 1 and 2 for your Frontend IP in the existing Load Balancer. You may want to pin your node pools to specific zones because of the availability of specialized VM SKU series, such as M-series.
+
+#### Azure Application Gateway
+
+Using the Application Gateway Ingress Controller add-on with your AKS cluster is supported only on Application Gateway v2 SKUs (Standard and WAF). To understand further considerations related to Azure Application Gateway, see [Scaling Application Gateway v2 and WAF v2](../application-gateway/application-gateway-autoscaling-zone-redundant.md).
+
+*Zonal*: To use the benefits of availability zones, we recommend that the Application Gateway resource be created in multiple zones, such as zones 1, 2, and 3. Select all three zones for the best intra-region resiliency. However, to correspond to node pools that are pinned to specific zones, you can pre-select zones 1 and 2 during the creation of your Application Gateway resource. You may want to pin your node pools to specific zones because of the availability of specialized VM SKU series, such as `M-series`.
+
+#### Zone Redundant Storage (ZRS)
+
+- We recommend that your AKS cluster is configured with managed ZRS disks because they're zone-redundant resources. Volumes can be scheduled on all zones.
+
+- Kubernetes is aware of Azure availability zones since version 1.12. You can deploy a `PersistentVolumeClaim` object referencing an Azure Managed Disk in a multi-zone AKS cluster. Kubernetes will take care of scheduling any pod that claims this PVC in the correct availability zone.
+
+- For Azure Database for MySQL, we recommend that the data and log files are hosted in zone-redundant storage (ZRS). These files are replicated to the standby server via the storage-level replication available with ZRS.
+
+#### Azure Firewall
+
+*Zonal*: To use the benefits of availability zones, we recommend that the Azure Firewall resource be created in multiple zones, such as zones 1, 2, and 3. We recommend that you select all three zones for the best intra-region resiliency.
+
+#### Azure Bastion
+
+*Regional*: Azure Bastion is deployed within VNets or peered VNets and is associated with an Azure region. For more information, see [Bastion FAQ](../bastion/bastion-faq.md#dr).
+
+#### Azure Container Registry (ACR)
+
+*Zone-redundant*: We recommend that you create a zone-redundant registry in the Premium service tier. You can also create a zone-redundant registry replica by setting the `zoneRedundancy` property for the replica. To learn how to enable zone redundancy for your ACR, see [Enable zone redundancy in Azure Container Registry for resiliency and high availability](../container-registry/zone-redundancy.md).
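+
+A minimal Azure CLI sketch, assuming placeholder names:
+
+```azurecli
+# Create a zone-redundant registry; the Premium tier is required for zone redundancy
+az acr create \
+    --resource-group <resource-group> \
+    --name <registry-name> \
+    --location <region> \
+    --sku Premium \
+    --zone-redundancy Enabled
+```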
+
+#### Azure Cache for Redis
+
+*Zone-redundant*: Azure Cache for Redis supports zone-redundant configurations in the Premium and Enterprise tiers. A zone-redundant cache places its nodes across different availability zones in the same region.
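+
+A minimal Azure CLI sketch, assuming placeholder names and a P1 cache size:
+
+```azurecli
+# Create a Premium cache with nodes placed in zones 1 and 2
+az redis create \
+    --resource-group <resource-group> \
+    --name <cache-name> \
+    --location <region> \
+    --sku Premium \
+    --vm-size p1 \
+    --zones 1 2
+```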
+
+#### Azure Active Directory (AD)
+
+*Global*: Azure AD is a global service with multiple levels of internal redundancy and automatic recoverability. Azure AD is deployed in over 30 datacenters around the world that provide availability zones where present. This number is growing rapidly as more regions are deployed.
+
+#### Azure Key Vault
+
+*Regional*: Azure Key Vault is deployed in a region. To maintain high durability of your keys and secrets, the contents of your key vault are replicated within the region and to a secondary region within the same geography.
+
+*Zone-redundant*: For Azure regions with availability zones and no region pair, Key Vault uses zone-redundant storage (ZRS) to replicate the contents of your key vault three times within the single location/region.
+
+## Workload considerations
+
+### Azure Kubernetes Service (AKS)
+
+- Pods can communicate with other pods, regardless of the node or availability zone where the pod lands. Your application may experience higher response time if the pods are located in different availability zones. While the extra round-trip latencies between pods are expected to fall within an acceptable range for most applications, there are application scenarios that require low latency, especially for a chatty communication pattern between pods.
+
+- We recommend that you test your application to ensure it performs well across availability zones.
+
+- For performance reasons, such as low latency, pods can be co-located in the same data center within the same availability zone. To co-locate pods in this way, you can create user node pools with a unique zone and proximity placement group. You can add a proximity placement group (PPG) to an existing AKS cluster by creating a new agent node pool and specifying the PPG (see the sketch after this list). Use Pod Topology Spread Constraints to control how pods are spread in your AKS cluster across availability zones, nodes, and regions.
+
+- After pods that require low latency communication are co-located in the same availability zone, communications between the pods aren't direct. Instead, pod communications are channeled through a service that defines a logical set of pods in your AKS cluster. Pods can be configured to talk to the service, and the communication is automatically load-balanced across all the pods that are members of the service.
+
+- To take advantage of availability zones, node pools contain underlying VMs that are zonal resources. To support applications that have different compute or storage demands, you can create user node pools with specific VM sizes when you create the user node pool.
+
+ For example, you may decide to use the `Standard_M32ms` under the `M-series` for your user nodes because the microservices in your application require high throughput, low latency, and memory optimized VM sizes that provide high vCPU counts and large amounts of memory. Depending on the deployment region, when you select the VM size in the Azure portal, you may see that this VM size is supported only in zone 1 and 2. You can accept this resiliency configuration as a trade-off for high performance.
+
+- You can't change the VM size of a node pool after you create it. For more information on node pool limitations, see [Limitations](../aks/use-multiple-node-pools.md#limitations).
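+
+As a sketch of the co-location approach described above, the following Azure CLI commands create a proximity placement group and pin a user node pool to a single zone; all names and the PPG resource ID are assumptions:
+
+```azurecli
+# Create a proximity placement group in the target region
+az ppg create \
+    --resource-group <resource-group> \
+    --name <ppg-name> \
+    --location <region>
+
+# Add a user node pool pinned to zone 1 and associated with the PPG
+az aks nodepool add \
+    --resource-group <resource-group> \
+    --cluster-name <cluster-name> \
+    --name lowlatpool \
+    --node-count 3 \
+    --zones 1 \
+    --ppg <ppg-resource-id>
+```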
+
+### Azure Database for MySQL Flexible Server
+
+The implication of deploying your node pools in specific zones, such as zone 1 and 2, is that all service dependencies of your AKS cluster must also support zone 1 and 2. In this workload architecture, your AKS cluster has a service dependency on Azure Database for MySQL Flexible Servers with zone resiliency. You would select zone 1 for your primary server and zone 2 for your standby server to be co-located with your AKS user node pools.
+### Azure Cache for Redis
+
+- Azure Cache for Redis distributes nodes in a zone-redundant cache in a round-robin manner over the availability zones that you've selected.
+
+- You can't update an existing Premium cache to use zone redundancy. To use zone redundancy, you must recreate the Azure Cache for Redis.
+
+- To achieve optimal resiliency, we recommend that you create your Azure Cache for Redis with three or more replicas so that you can distribute them across three availability zones.
+## Disaster recovery considerations
+
+*Availability zones* are used for better resiliency to achieve high availability of your workload within the primary region of your deployment.
+
+*Disaster Recovery* consists of recovery operations and practices defined in your business continuity plan. Your business continuity plan addresses both how your workload recovers during a disruptive event and how it fully recovers after the event. Consider extending your deployment to an alternative region.
+For your application tier, review the following business continuity and disaster recovery considerations for AKS.
+
+- Consider running multiple AKS clusters in alternative regions. The alternative region can use a secondary paired region. Or, where there's no region pairing for your primary region, you can select an alternative region based on your consideration for available services, capacity, geographical proximity, and data sovereignty. Review the [Azure regions decision guide](/azure/cloud-adoption-framework/migrate/azure-best-practices/multiple-regions) and the [deployment stamp pattern](/azure/architecture/patterns/deployment-stamp).
+
+- You can configure your AKS clusters as active-active, active-standby, or active-passive.
+
+- For your database tier, disaster recovery features include geo-redundant backups with the ability to initiate geo-restore and deploying read replicas in a different region.
+
+- During an outage, you'll need to decide whether to initiate a recovery. You'll need to initiate recovery operations only when the outage is likely to last longer than your workload's recovery time objective (RTO). Otherwise, you'll wait for service recovery by checking the service status on the Azure Service Health Dashboard. On the Service Health blade of the Azure portal, you can view any notifications associated with your subscription.
+
+- When you do initiate recovery with the geo-restore feature in Azure Database for MySQL, a new database server is created using backup data that is replicated from another region.
+## Next steps
+
+Learn more about:
+> [!div class="nextstepaction"]
+> [Azure services that support availability zones](availability-zones-service-support.md#azure-services-with-availability-zone-support)
reliability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview.md
+
+ Title: Azure reliability documentation
+description: Azure reliability documentation for availability zones, cross-regional disaster recovery, availability of services for sovereign clouds, regions, and category.
+Last updated: 07/20/2022
+# Azure reliability documentation
+
+Reliability consists of two principles: resiliency and availability. The goal of resiliency is to return your application to a fully functioning state after a failure occurs. The goal of availability is to provide consistent access to your application or workload for users as they need it.
+
+Azure includes built-in reliability services that you can use and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve reliability. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
+
+The Azure reliability documentation offers reliability guidance for Azure services. This guidance includes information on availability zone support, disaster recovery guidance, and availability of services.
+
+For more detailed information on reliability and reliability principles in Microsoft Azure services, see [Microsoft Azure Well-Architected Framework: Reliability](/azure/architecture/framework/#reliability).
+## Reliability requirements
+
+The required level of reliability for any Azure solution depends on several considerations. Availability and latency SLAs and other business requirements drive the architectural choices and resiliency level, and should be considered first. Availability requirements range from how much downtime is acceptable, and how much it costs your business, to the amount of money and time that you can realistically invest in making an application highly available.
+
+Building reliable systems on Azure is a **shared responsibility**. Microsoft is responsible for the reliability of the cloud platform, including its global network and datacenters. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. While Azure continually strives for the highest possible resiliency in its SLAs for the cloud platform, you must define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether the architecture meets the business requirements.
+
+As you strive for higher percentages of SLA-guaranteed uptime, the cost and complexity to achieve that level of availability grow. An uptime of 99.99 percent translates to about five minutes of total downtime per month. Is it worth the additional complexity and cost to reach that percentage? The answer depends on the individual business requirements. When deciding on final SLA commitments, understand Microsoft's supported SLAs. Each Azure service has its own SLA.
+
+## Building reliability
+
+You should define your application's reliability requirements at the beginning of planning. If you know which applications don't need 100% high availability during certain periods of time, you can optimize costs during those non-critical periods. Identify the type of failures an application can experience, and the potential effect of each failure. A recovery plan should cover all critical services by finalizing a recovery strategy at both the individual component level and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. Then test the end-to-end application environment to measure application reliability and recovery against unexpected failure.
+
+The following checklist covers the scope of reliability planning.
+
+| **Reliability planning** |
+| |
+| **Define** availability and recovery targets to meet business requirements. |
+| **Design** the reliability features of your applications based on the availability requirements. |
+| **Align** applications and data platforms to meet your reliability requirements. |
+| **Configure** connection paths to promote availability. |
+| **Use** availability zones and disaster recovery planning where applicable to improve reliability and optimize costs. |
+| **Ensure** your application architecture is resilient to failures. |
+| **Know** what happens if SLA requirements are not met. |
+| **Identify** possible failure points in the system; application design should tolerate dependency failures by deploying circuit breaking. |
+| **Build** applications that operate in the absence of their dependencies. |
+
+## Regions and availability zones
+
+Regions and availability zones are a big part of the reliability equation. Regions feature multiple, physically separate availability zones, connected by a high-performance network with less than 2 ms of latency between physical zones. Low latency helps your data stay synchronized and accessible when things go wrong. You can use this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions.
+
+Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your disaster recovery and business continuity strategy needs. Choose the best region for your needs based on technical and regulatory considerations, such as service capabilities, data residency, compliance requirements, and latency, and begin advancing your reliability strategy. For more information, see [Azure regions and availability zones](availability-zones-overview.md).
+
+## Shared responsibility
+
+Building reliable systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the cloud platform, which includes its global network and datacenters. Azure customers and partners are responsible for the reliability of their cloud applications, using architectural best practices based on the requirements of each workload. For more information, see [Business continuity management program in Azure](business-continuity-management-program.md).
+
+## Azure service dependencies
+
+Microsoft Azure services are available globally to drive your cloud operations at an optimal level. You can choose the best region for your needs based on technical and regulatory considerations: service capabilities, data residency, compliance requirements, and latency.
+
+Azure services deployed to Azure regions are listed on the [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) page. To better understand regions and Availability Zones in Azure, see [Regions and Availability Zones in Azure](availability-zones-overview.md).
+
+Azure services are built for reliability, including high availability and disaster recovery. There are no services that are dependent on a single logical data center (to avoid single points of failure). Non-regional services listed on [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) are services for which there is no dependency on a specific Azure region. Non-regional services are deployed to two or more regions, and if there is a regional failure, the instance of the service in another region continues servicing customers.
+
+Certain non-regional services enable customers to specify the region where the underlying virtual machine (VM) on which the service runs will be deployed. For example, [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) enables customers to specify the region location where the VM resides. All Azure services that store customer data allow the customer to specify the specific regions in which their data will be stored. The exception is [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory/), which has geo placement (such as Europe or North America). For more information about data storage residency, see the [Data residency map](https://azure.microsoft.com/global-infrastructure/data-residency/).
+
+If you need to understand dependencies between Azure services to help better architect your applications and services, you can request the **Azure service dependency documentation** by contacting your Microsoft sales or customer representative. This document lists the dependencies for Azure services, including dependencies on any common major internal services such as control plane services. To obtain this documentation, you must be a Microsoft customer and have the appropriate non-disclosure agreement (NDA) with Microsoft.
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Business continuity management in Azure](business-continuity-management-program.md)
+
+> [!div class="nextstepaction"]
+> [Availability zone migration guidance](availability-zones-migration-overview.md)
+
+> [!div class="nextstepaction"]
+> [Availability of service by category](availability-service-by-category.md)
+
+> [!div class="nextstepaction"]
+> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
+
+> [!div class="nextstepaction"]
+> [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
+
+ Title: Reliability in Azure Functions
+description: Find out about reliability in Azure Functions
+Last updated: 10/07/2022
+<!--#Customer intent: I want to understand reliability support in Azure Functions so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
+# What is reliability in Azure Functions?
+
+This article describes reliability support in Azure Functions. It covers intra-regional resiliency with [availability zones](#availability-zone-support) and links to information on [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+Availability zone support for Azure Functions is available on both Premium (Elastic Premium) and Dedicated (App Service) plans. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, see [Migrate App Service to availability zone support](migrate-app-service.md).
+## Availability zone support
+
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](availability-zones-service-support.md).
+
+There are three types of Azure services that support availability zones: zonal, zone-redundant, and always-available services. You can learn more about these types of services and how they promote resiliency in [Azure services with availability zone support](availability-zones-service-support.md#azure-services-with-availability-zone-support).
+
+Azure Functions supports both [zone-redundant and zonal instances](availability-zones-service-support.md#azure-services-with-availability-zone-support).
+
+- **Zonal**. Function app instances are placed in a single zone that's selected by the platform in the selected region. A zonal function app is isolated from any outages that occur in other zones. However, if an outage impacts the specific zone chosen for the function app, the function app won't be available.
+
+- **Zone-redundant**. The function app platform automatically spreads the instances in the plan across all zones of the selected region. For example, in a region with three zones, if an instance count is larger than three and the number of instances is divisible by three, the instances are distributed evenly. Otherwise, instance counts beyond 3 * N are distributed across the remaining one or two zones. A zone redundant function app automatically distributes the instances your app runs on between the availability zones in the region. For apps running in a zone-redundant Premium plan, even as the app scales in and out, the instances the app is running on are still evenly distributed between availability zones.
+
+>[!IMPORTANT]
+>Azure Functions can run on the Azure App Service platform. In the App Service platform, plans that host Premium plan function apps are referred to as Elastic Premium plans, with SKU names like EP1. If you choose to run your function app on a Premium plan, make sure to create a plan with an SKU name that starts with "E", such as EP1. App Service plan SKU names that start with "P", such as P1V2 (Premium V2 Small plan), are actually [Dedicated hosting plans](../azure-functions/dedicated-plan.md). Because they are Dedicated and not Elastic Premium, plans with SKU names starting with "P" won't scale dynamically and may increase your costs.
+
+
+### Regional availability
+
+Zone-redundant Premium plans are available in the following regions:
+
+| Americas | Europe | Middle East | Africa | Asia Pacific |
+|---|---|---|---|---|
+| Brazil South | France Central | Qatar Central | | Australia East |
+| Canada Central | Germany West Central | | | Central India |
+| Central US | North Europe | | | China North 3 |
+| East US | Sweden Central | | | East Asia |
+| East US 2 | UK South | | | Japan East |
+| South Central US | West Europe | | | Southeast Asia |
+| West US 2 | | | | |
+| West US 3 | | | | |
+
+### Prerequisites
+
+Availability zone support is a property of the Premium plan. The following are the current requirements/limitations for enabling availability zones:
+
+- You can only enable availability zones when creating a Premium plan for your function app. You can't convert an existing Premium plan to use availability zones.
+- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](../azure-functions/storage-considerations.md#storage-account-requirements). If you use a different type of storage account, Functions may show unexpected behavior during a zonal outage.
+- Both Windows and Linux are supported.
+- Your function app must be hosted on an [Elastic Premium](../azure-functions/functions-premium-plan.md) or Dedicated hosting plan. To learn how to use zone redundancy with a Dedicated plan, see [Migrate App Service to availability zone support](migrate-app-service.md).
+ - Availability zone support isn't currently available for function apps on [Consumption](../azure-functions/consumption-plan.md) plans.
+- Function apps hosted on a Premium plan must have a minimum [always ready instances](../azure-functions/functions-premium-plan.md#always-ready-instances) count of three.
+ - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three.
+- If you aren't using Premium plan or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](../reliability/migrate-functions.md).
+
+### Pricing
+
+There's no additional cost associated with enabling availability zones. Pricing for a zone redundant Premium plan is the same as a single zone Premium plan. You'll be charged based on your Premium plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances.
+
+### Create a zone-redundant Premium plan and function app
+
+There are currently two ways to deploy a zone-redundant Premium plan and function app. You can use either the [Azure portal](https://portal.azure.com) or an ARM template.
+
+# [Azure portal](#tab/azure-portal)
+
+1. Open the Azure portal and navigate to the **Create Function App** page. For information on creating a function app in the portal, see [Create a function app](../azure-functions/functions-create-function-app-portal.md#create-a-function-app).
+
+1. In the **Basics** page, fill out the fields for your function app. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
+
+ | Setting | Suggested value | Notes for Zone Redundancy |
+ | | - | -- |
+    | **Region** | Preferred region | The region where the new function app and its plan are created. You must pick a region that is availability zone enabled from the [list above](#prerequisites). |
+
+    ![Screenshot of Basics tab of function app create page.](../azure-functions/media/functions-az-redundancy/azure-functions-basics-az.png)
+
+1. In the **Hosting** page, fill out the fields for your function app hosting plan. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
+
+ | Setting | Suggested value | Notes for Zone Redundancy |
+ | | - | -- |
+ | **Storage Account** | A [zone-redundant storage account](../azure-functions/storage-considerations.md#storage-account-requirements) | As mentioned above in the [prerequisites](#prerequisites) section, we strongly recommend using a zone-redundant storage account for your zone redundant function app. |
+ | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../reliability/migrate-app-service.md). |
+ | **Zone Redundancy** | Enabled | This field populates the flag that determines if your app is zone redundant or not. You won't be able to select `Enabled` unless you have chosen a region supporting zone redundancy, as mentioned in step 2. |
+
+    ![Screenshot of Hosting tab of function app create page.](../azure-functions/media/functions-az-redundancy/azure-functions-hosting-az.png)
+
+1. For the rest of the function app creation process, create your function app as normal. There are no fields in the rest of the creation process that affect zone redundancy.
+
+# [ARM template](#tab/arm-template)
+
+You can use an [ARM template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md) to deploy to a zone-redundant Premium plan. For a guide to hosting Functions on Premium plans, see [Deploy on Premium plan](../azure-functions/functions-infrastructure-as-code.md#deploy-on-premium-plan).
+
+The only properties to be aware of while creating a zone-redundant hosting plan are the new `zoneRedundant` property and the plan's instance count (`capacity`) fields. The `zoneRedundant` property must be set to `true` and the `capacity` property should be set based on the workload requirement, but not less than `3`. Choosing the right capacity varies based on several factors and high availability/fault tolerance strategies. A good rule of thumb is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
+
+> [!IMPORTANT]
+> Azure Functions apps hosted on an elastic premium, zone-redundant plan must have a minimum [always ready instance](../azure-functions/functions-premium-plan.md#always-ready-instances) count of 3. This makes sure that a zone-redundant function app always has enough instances to satisfy at least one worker per zone.
+
+Below is an ARM template snippet for a zone-redundant, Premium plan showing the `zoneRedundant` field and the `capacity` specification.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2021-01-15",
+ "name": "<YOUR_PLAN_NAME>",
+ "location": "<YOUR_REGION_NAME>",
+ "sku": {
+ "name": "EP1",
+ "tier": "ElasticPremium",
+ "size": "EP1",
+ "family": "EP",
+ "capacity": 3
+ },
+ "kind": "elastic",
+ "properties": {
+ "perSiteScaling": false,
+ "elasticScaleEnabled": true,
+ "maximumElasticWorkerCount": 20,
+ "isSpot": false,
+ "reserved": false,
+ "isXenon": false,
+ "hyperV": false,
+ "targetWorkerCount": 0,
+ "targetWorkerSizeId": 0,
+ "zoneRedundant": true
+ }
+ }
+]
+```
+
+To learn more about these templates, see [Automate resource deployment in Azure Functions](../azure-functions/functions-infrastructure-as-code.md).
+After the zone-redundant plan is created and deployed, any function app hosted on your new plan is considered zone-redundant.
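+
+Recent Azure CLI versions also expose zone redundancy when creating a Premium plan. The following is a hedged sketch, assuming placeholder names and that your CLI version supports the `--zone-redundant` flag (check `az functionapp plan create --help`):
+
+```azurecli
+# Create a zone-redundant Elastic Premium plan with the minimum of three instances
+az functionapp plan create \
+    --resource-group <resource-group> \
+    --name <plan-name> \
+    --location <region> \
+    --sku EP1 \
+    --min-instances 3 \
+    --zone-redundant
+```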
+
+### Migrate your function app to a zone-redundant plan
+
+Azure Functions currently doesn't support in-place migration of existing function app instances. For information on how to migrate the public multitenant Premium plan from non-availability zone to availability zone support, see [Migrate your function app to availability zone support](migrate-functions.md).
+
+### Zone down experience
+
+All available function app instances of zone-redundant function apps are enabled and processing events. When a zone goes down, Functions detects the lost instances and automatically attempts to find new replacement instances if needed. Elastic scale behavior still applies. However, in a zone-down scenario there's no guarantee that requests for additional instances can succeed, since back-filling lost instances occurs on a best-effort basis.
+Applications that are deployed in an availability zone enabled Premium plan continue to run even when other zones in the same region suffer an outage. However, it's possible that non-runtime behaviors could still be impacted from an outage in other availability zones. These impacted behaviors can include Premium plan scaling, application creation, application configuration, and application publishing. Zone redundancy for Premium plans only guarantees continued uptime for deployed applications.
+
+When Functions allocates instances to a zone-redundant Premium plan, it uses the best-effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets. A Premium plan is considered balanced when each zone has the same number of VMs (± 1 VM) as all of the other zones used by the plan.
+
+## Disaster recovery: cross region failover
+
+When entire Azure regions or datacenters experience downtime, your mission-critical code needs to continue processing in a different region. See [Azure Functions geo-disaster recovery and high availability](../azure-functions/functions-geo-disaster-recovery.md) for guidance on how to set up a cross-region failover.
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
+
+ Title: Availability of services for Microsoft Azure operated by 21Vianet
+description: Learn how services are supported for Microsoft Azure operated by 21Vianet
+Last updated: 10/27/2022
+# Availability of services for Microsoft Azure operated by 21Vianet
+
+Microsoft Azure operated by 21Vianet (Azure China) is a physically separated instance of cloud services located in China. It's independently operated and transacted by Shanghai Blue Cloud Technology Co., Ltd. ("21Vianet"), a wholly owned subsidiary of Beijing 21Vianet Broadband Data Center Co., Ltd.
+## Service availability
+
+Microsoft's goal for Azure in China is to match service availability in Azure. For service availability for Azure in China, see [Products available by China regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=all&regions=china-non-regional,china-east,china-east-2,china-east-3,china-north,china-north-2,china-north-3&rar=true).
+
+### AI + machine learning
+
+This section outlines variations and considerations when using Azure Bot Service, Azure Machine Learning, and Cognitive Services.
+
+| Product | Unsupported, limited, and/or modified features | Notes |
+||--||
+|Azure Machine Learning| See [Azure Machine Learning feature availability across Azure in China cloud regions](../machine-learning/reference-machine-learning-cloud-parity.md#azure-china-21vianet). | |
+### Media
+
+This section outlines variations and considerations when using Media services.
+
+| Product | Unsupported, limited, and/or modified features | Notes |
+||--||
+| Azure Media Services | For Azure Media Services v3 feature variations in Azure in China, see [Azure Media Services v3 clouds and regions availability](/azure/media-services/latest/azure-clouds-regions#china). | |
+
+### Networking
+
+This section outlines variations and considerations when using Networking services.
+
+| Product | Unsupported, limited, and/or modified features | Notes |
+||--||
+| Private Link | <li>For Private Link services availability, see [Azure Private Link availability](../private-link/availability.md).<li>For Private DNS zone names, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md#government). | |
+## Azure in China REST endpoints
+
+The table below lists API endpoints in Azure vs. Azure in China for accessing and managing some of the more common services.
+
+For IP ranges for Azure in China, download [Azure Datacenter IP Ranges in China](https://www.microsoft.com/download/confirmation.aspx?id=57062).
+
+| Service category | Azure Global | Azure in China |
+|-|-|-|
+| Azure (in general) | \*.windows.net | \*.chinacloudapi.cn |
+| Azure Active Directory | `https://login.microsoftonline.com` | `https://login.chinacloudapi.cn` |
+| Azure App Configuration | \*.azconfig.io | \*.azconfig.azure.cn |
+| Azure compute | \*.cloudapp.net | \*.chinacloudapp.cn |
+| Azure data | `https://{location}.experiments.azureml.net` | `https://{location}.experiments.ml.azure.cn` |
+| Azure storage | \*.blob.core.windows.net \*.queue.core.windows.net \*.table.core.windows.net \*.dfs.core.windows.net | \*.blob.core.chinacloudapi.cn \*.queue.core.chinacloudapi.cn \*.table.core.chinacloudapi.cn \*.dfs.core.chinacloudapi.cn|
+| Azure management| `https://management.azure.com/` | `https://management.chinacloudapi.cn/` |
+| Azure service management | https://management.core.windows.net | [https://management.core.chinacloudapi.cn](https://management.core.chinacloudapi.cn/) |
+| Azure Resource Manager | [https://management.azure.com](https://management.azure.com/) | [https://management.chinacloudapi.cn](https://management.chinacloudapi.cn/) |
+| Azure portal | [https://portal.azure.com](https://portal.azure.com/) | [https://portal.azure.cn](https://portal.azure.cn/) |
+| SQL Database | \*.database.windows.net | \*.database.chinacloudapi.cn |
+| SQL Azure DB management API | [https://management.database.windows.net](https://management.database.windows.net/) | [https://management.database.chinacloudapi.cn](https://management.database.chinacloudapi.cn/) |
+| Azure Service Bus | \*.servicebus.windows.net | \*.servicebus.chinacloudapi.cn |
+| Azure SignalR Service| \*.service.signalr.net | \*.signalr.azure.cn |
+| Azure Time Series Insights | \*.timeseries.azure.com \*.insights.timeseries.azure.cn | \*.timeseries.azure.cn \*.insights.timeseries.azure.cn |
+| Azure Access Control Service | \*.accesscontrol.windows.net | \*.accesscontrol.chinacloudapi.cn |
+| Azure HDInsight | \*.azurehdinsight.net | \*.azurehdinsight.cn |
+| SQL DB import/export service endpoint | | 1. China East [https://sh1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc](https://sh1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc) <br>2. China North [https://bj1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc](https://bj1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc) |
+| MySQL PaaS | | \*.mysqldb.chinacloudapi.cn |
+| Azure Service Fabric cluster | \*.cloudapp.azure.com | \*.chinaeast.chinacloudapp.cn |
+| Azure Spring Cloud | \*.azuremicroservices.io | \*.microservices.azure.cn |
+| Azure Active Directory (Azure AD) | \*.onmicrosoft.com | \*.partner.onmschina.cn |
+| Azure AD logon | [https://login.microsoftonline.com](https://login.microsoftonline.com/) | [https://login.partner.microsoftonline.cn](https://login.partner.microsoftonline.cn/) |
+| Microsoft Graph | [https://graph.microsoft.com](https://graph.microsoft.com/) | [https://microsoftgraph.chinacloudapi.cn](https://microsoftgraph.chinacloudapi.cn/) |
+| Azure Cognitive Services | <https://api.projectoxford.ai/face/v1.0> | <https://api.cognitive.azure.cn/face/v1.0> |
+| Azure Bot Services | <\*.botframework.com> | <\*.botframework.azure.cn> |
+| Azure Key Vault API | \*.vault.azure.net | \*.vault.azure.cn |
+| Sign in with PowerShell: <br>- Azure classic portal <br>- Azure Resource Manager <br>- Azure AD| - Add-AzureAccount<br>- Connect-AzureRmAccount <br> - Connect-msolservice |  - Add-AzureAccount -Environment AzureChinaCloud <br> - Connect-AzureRmAccount -Environment AzureChinaCloud <br>- Connect-msolservice -AzureEnvironment AzureChinaCloud |
+
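+When working against Azure in China with the Azure SDK for .NET, the endpoints in this table translate into pointing each client at the sovereign-cloud authority and service hosts instead of the global defaults. The following C# sketch is illustrative only; the subscription ID and storage account name are hypothetical placeholders:
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.ResourceManager;
+using Azure.Storage.Blobs;
+
+// Authenticate against the Azure in China sign-in endpoint
+// (https://login.chinacloudapi.cn) instead of the global one.
+var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
+{
+    AuthorityHost = AzureAuthorityHosts.AzureChina
+});
+
+// Target the Azure in China Resource Manager endpoint
+// (https://management.chinacloudapi.cn).
+var armClient = new ArmClient(credential, "<SUBSCRIPTION-ID>", new ArmClientOptions
+{
+    Environment = ArmEnvironment.AzureChina
+});
+
+// Storage endpoints use the *.core.chinacloudapi.cn suffix from the table above.
+var blobServiceClient = new BlobServiceClient(
+    new Uri("https://<STORAGE-ACCOUNT-NAME>.blob.core.chinacloudapi.cn"),
+    credential);
+```
+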
+### Application Insights
+
+> [!NOTE]
+> Codeless agent/extension-based monitoring for Azure App Services is **currently not supported**. Snapshot Debugger is also not currently available.
+
+### SDK endpoint modifications
+
+To send data from Application Insights in this region, you need to modify the default endpoint addresses used by the Application Insights SDKs. Each SDK requires slightly different modifications.
+
+### .NET with applicationinsights.config
+
+```xml
+<ApplicationInsights>
+ ...
+ <TelemetryModules>
+ <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
+ <QuickPulseServiceEndpoint>https://quickpulse.applicationinsights.azure.cn/QuickPulseService.svc</QuickPulseServiceEndpoint>
+ </Add>
+ </TelemetryModules>
+ ...
+ <TelemetryChannel>
+ <EndpointAddress>https://dc.applicationinsights.azure.cn/v2/track</EndpointAddress>
+ </TelemetryChannel>
+ ...
+ <ApplicationIdProvider Type="Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ApplicationInsightsApplicationIdProvider, Microsoft.ApplicationInsights">
+ <ProfileQueryEndpoint>https://dc.applicationinsights.azure.cn/api/profiles/{0}/appId</ProfileQueryEndpoint>
+ </ApplicationIdProvider>
+ ...
+</ApplicationInsights>
+```
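+
+If you configure the classic .NET SDK in code rather than through ApplicationInsights.config, the same ingestion endpoint can be set on the telemetry channel. A minimal sketch, assuming the classic Microsoft.ApplicationInsights package:
+
+```csharp
+using Microsoft.ApplicationInsights.Extensibility;
+
+// A minimal sketch for the classic .NET SDK: override the default
+// ingestion endpoint on the active configuration's telemetry channel.
+var configuration = TelemetryConfiguration.Active;
+configuration.TelemetryChannel.EndpointAddress =
+    "https://dc.applicationinsights.azure.cn/v2/track";
+```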
+
+### .NET Core
+
+Modify the appsettings.json file in your project as follows to adjust the main endpoint:
+
+```json
+"ApplicationInsights": {
+ "InstrumentationKey": "instrumentationkey",
+ "TelemetryChannel": {
+ "EndpointAddress": "https://dc.applicationinsights.azure.cn/v2/track"
+ }
+ }
+```
+
+The values for Live Metrics and the Profile Query Endpoint can only be set via code. To override the default values for all endpoint values via code, make the following changes in the `ConfigureServices` method of the `Startup.cs` file:
+
+```csharp
+using Microsoft.ApplicationInsights.Channel; // ITelemetryChannel
+using Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId;
+using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
+using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel; // ServerTelemetryChannel
+// place these using directives at the top of the Startup.cs file
+
+ services.ConfigureTelemetryModule<QuickPulseTelemetryModule>((module, o) => module.QuickPulseServiceEndpoint="https://quickpulse.applicationinsights.azure.cn/QuickPulseService.svc");
+
+ services.AddSingleton(new ApplicationInsightsApplicationIdProvider() { ProfileQueryEndpoint = "https://dc.applicationinsights.azure.cn/api/profiles/{0}/appId" });
+
+ services.AddSingleton<ITelemetryChannel>(new ServerTelemetryChannel() { EndpointAddress = "https://dc.applicationinsights.azure.cn/v2/track" });
+
+ //place in ConfigureServices method. If present, place this prior to services.AddApplicationInsightsTelemetry("instrumentation key");
+```
+
+### Java
+
+Modify the applicationinsights.xml file to change the default endpoint address.
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
+ <InstrumentationKey>ffffeeee-dddd-cccc-bbbb-aaaa99998888</InstrumentationKey>
+ <TelemetryModules>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule"/>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebSessionTrackingTelemetryModule"/>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebUserTrackingTelemetryModule"/>
+ </TelemetryModules>
+ <TelemetryInitializers>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebOperationIdTelemetryInitializer"/>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebOperationNameTelemetryInitializer"/>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebSessionTelemetryInitializer"/>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebUserTelemetryInitializer"/>
+ <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebUserAgentTelemetryInitializer"/>
+ </TelemetryInitializers>
+ <!--Add the following Channel value to modify the Endpoint address-->
+ <Channel type="com.microsoft.applicationinsights.channel.concrete.inprocess.InProcessTelemetryChannel">
+ <EndpointAddress>https://dc.applicationinsights.azure.cn/v2/track</EndpointAddress>
+ </Channel>
+</ApplicationInsights>
+```
+
+### Spring Boot
+
+Modify the `application.properties` file and add:
+
+```properties
+azure.application-insights.channel.in-process.endpoint-address=https://dc.applicationinsights.azure.cn/v2/track
+```
+
+### Node.js
+
+```javascript
+var appInsights = require("applicationinsights");
+appInsights.setup('INSTRUMENTATION_KEY');
+appInsights.defaultClient.config.endpointUrl = "https://dc.applicationinsights.azure.cn/v2/track"; // ingestion
+appInsights.defaultClient.config.profileQueryEndpoint = "https://dc.applicationinsights.azure.cn/api/profiles/{0}/appId"; // appid/profile lookup
+appInsights.defaultClient.config.quickPulseHost = "https://quickpulse.applicationinsights.azure.cn/QuickPulseService.svc"; //live metrics
+appInsights.Configuration.start();
+```
+
+The endpoints can also be configured through environment variables:
+
+```
+Instrumentation Key: "APPINSIGHTS_INSTRUMENTATIONKEY"
+Profile Endpoint: "https://dc.applicationinsights.azure.cn/api/profiles/{0}/appId"
+Live Metrics Endpoint: "https://quickpulse.applicationinsights.azure.cn/QuickPulseService.svc"
+```
+
+### JavaScript
+
+```javascript
+<script type="text/javascript">
+var sdkInstance="appInsightsSDK";window[sdkInstance]="appInsights";var aiName=window[sdkInstance],aisdk=window[aiName]||function(e){function n(e){i[e]=function(){var n=arguments;i.queue.push(function(){i[e].apply(i,n)})}}var i={config:e};i.initialize=!0;var a=document,t=window;setTimeout(function(){var n=a.createElement("script");n.src=e.url||"https://az416426.vo.msecnd.net/next/ai.2.min.js",a.getElementsByTagName("script")[0].parentNode.appendChild(n)});try{i.cookie=a.cookie}catch(e){}i.queue=[],i.version=2;for(var r=["Event","PageView","Exception","Trace","DependencyData","Metric","PageViewPerformance"];r.length;)n("track"+r.pop());n("startTrackPage"),n("stopTrackPage");var o="Track"+r[0];if(n("start"+o),n("stop"+o),!(!0===e.disableExceptionTracking||e.extensionConfig&&e.extensionConfig.ApplicationInsightsAnalytics&&!0===e.extensionConfig.ApplicationInsightsAnalytics.disableExceptionTracking)){n("_"+(r="onerror"));var s=t[r];t[r]=function(e,n,a,t,o){var c=s&&s(e,n,a,t,o);return!0!==c&&i["_"+r]({message:e,url:n,lineNumber:a,columnNumber:t,error:o}),c},e.autoExceptionInstrumented=!0}return i}
+(
+ {
+ instrumentationKey:"INSTRUMENTATION_KEY",
+ endpointUrl: "https://dc.applicationinsights.azure.cn/v2/track"
+ }
+);
+window[aiName]=aisdk,aisdk.queue&&0===aisdk.queue.length&&aisdk.trackPageView({});
+</script>
+```
+
+## Remote Management
+
+### Azure portal
+
+You can sign in to the [Azure portal](https://portal.azure.cn/?l=en.en-us) from anywhere in the world to manage workloads in Azure China.
+
+### Work with administrator roles
+
+One account administrator role is created per Azure account; it's typically held by the person who signed up for or bought the Azure subscription. This role is authorized to use the [Account Center](https://account.windowsazure.cn/Home/Index/en-us) to perform management tasks.
+
+To sign in, the account administrator uses the organization ID (Org ID) created when the subscription was purchased.
+
+### Create a service administrator to manage the service deployment
+
+One service administrator role is created per Azure account, and is authorized to manage services in the Azure portal. With a new subscription, the account administrator is also the service administrator.
+
+### Create a co-administrator
+
+Account administrators can create up to 199 co-administrator roles per subscription. This role has the same access privileges as the service administrator, but can't change the association of subscriptions to Azure directories.
+
security Threat Modeling Tool Releases 73209279 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73209279.md
+
+ Title: Microsoft Threat Modeling Tool release 09/27/2022 - Azure
+description: Documenting the release notes for the threat modeling tool release 7.3.20927.9.
+Last updated: 09/27/2022
+# Threat Modeling Tool update release 7.3.20927.9 - 09/27/2022
+
+Version 7.3.20927.9 of the Microsoft Threat Modeling Tool (TMT) was released on September 27, 2022, and contains the following changes:
+
+- Accessibility improvements
+- Security fixes
+
+## Known issues
+
+### Errors related to TMT7.application file deserialization
+
+#### Issue
+
+Some customers have reported receiving the following error message when downloading the Threat Modeling Tool:
+
+```
+The threat model file '$PATH\TMT7.application' could not be deserialized. File is not an actual threat model or the threat model may be corrupted.
+```
+
+This error occurs because some browsers do not natively support ClickOnce installation. In those cases, the ClickOnce application file is downloaded to the user's hard drive.
+
+#### Workaround
+
+This error will continue to appear if the Threat Modeling Tool is launched by double-clicking the TMT7.application file. However, after you bypass the error, the tool functions normally. Rather than launching the Threat Modeling Tool by double-clicking the TMT7.application file, use the shortcuts created in the Windows Menu during installation to start the tool.
+
+## System requirements
+
+- Supported Operating Systems
+ - [Microsoft Windows 10 Anniversary Update](https://blogs.windows.com/windowsexperience/2016/08/02/how-to-get-the-windows-10-anniversary-update/#HTkoK5Zdv0g2F2Zq.97) or later
+- .NET Version Required
+ - [.NET 4.7.1](https://go.microsoft.com/fwlink/?LinkId=863262) or later
+- Additional Requirements
+ - An Internet connection is required to receive updates to the tool as well as templates.
+
+## Documentation and feedback
+
+- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+
+## Next steps
+
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases 73211082 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73211082.md
+
+ Title: Microsoft Threat Modeling Tool release 11/08/2022 - Azure
+description: Documenting the release notes for the threat modeling tool release 7.3.21108.2.
+Last updated: 11/08/2022
+# Threat Modeling Tool update release 7.3.21108.2 - 11/08/2022
+
+Version 7.3.21108.2 of the Microsoft Threat Modeling Tool (TMT) was released on November 8, 2022, and contains the following changes:
+
+- Bug fixes
+
+## Known issues
+
+### Errors related to TMT7.application file deserialization
+
+#### Issue
+
+Some customers have reported receiving the following error message when downloading the Threat Modeling Tool:
+
+```
+The threat model file '$PATH\TMT7.application' could not be deserialized. File is not an actual threat model or the threat model may be corrupted.
+```
+
+This error occurs because some browsers do not natively support ClickOnce installation. In those cases, the ClickOnce application file is downloaded to the user's hard drive.
+
+#### Workaround
+
+This error will continue to appear if the Threat Modeling Tool is launched by double-clicking the TMT7.application file. However, after you bypass the error, the tool functions normally. Rather than launching the Threat Modeling Tool by double-clicking the TMT7.application file, use the shortcuts created in the Windows Menu during installation to start the tool.
+
+## System requirements
+
+- Supported Operating Systems
+ - [Microsoft Windows 10 Anniversary Update](https://blogs.windows.com/windowsexperience/2016/08/02/how-to-get-the-windows-10-anniversary-update/#HTkoK5Zdv0g2F2Zq.97) or later
+- .NET Version Required
+ - [.NET 4.7.1](https://go.microsoft.com/fwlink/?LinkId=863262) or later
+- Additional Requirements
+ - An Internet connection is required to receive updates to the tool as well as templates.
+
+## Documentation and feedback
+
+- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+
+## Next steps
+
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases.md
The Microsoft Threat Modeling Tool is currently released as a free [click-to-dow
## Release Notes
+- [Microsoft Threat Modeling Tool GA Release Version 7.3.21108.2](threat-modeling-tool-releases-73211082.md) - November 8 2022
+- [Microsoft Threat Modeling Tool GA Release Version 7.3.20927.9](threat-modeling-tool-releases-73209279.md) - September 27 2022
- [Microsoft Threat Modeling Tool GA Release Version 7.3.00729.1](threat-modeling-tool-releases-73007291.md) - July 29 2020 - [Microsoft Threat Modeling Tool GA Release Version 7.3.00714.2](threat-modeling-tool-releases-73007142.md) - July 14 2020 - [Microsoft Threat Modeling Tool GA Release Version 7.3.00316.1](threat-modeling-tool-releases-73003161.md) - March 22 2020
sentinel Basic Logs Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/basic-logs-use-cases.md
The primary log sources used for detection often contain the metadata and contex
Event log data in Basic Logs can't be used as the primary log source for security incidents and alerts. But Basic Log event data is useful to correlate and draw conclusions when you investigate an incident or perform threat hunting.
-This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables. Before configuring tables as Basic Logs, [compare log data plans](../azure-monitor/logs/log-analytics-workspace-overview.md#log-data-plans).
+This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables. Before configuring tables as Basic Logs, [compare log data plans](../azure-monitor/logs/basic-logs-configure.md).
## Storage access logs for cloud providers
A new and growing source of log data is Internet of Things (IoT) connected devic
## Next steps -- [Log plans](../azure-monitor/logs/log-analytics-workspace-overview.md#log-data-plans)-- [Configure Basic Logs in Azure Monitor](../azure-monitor/logs/basic-logs-configure.md)
+- [Set a table's log data plan in Azure Monitor Logs](../azure-monitor/logs/basic-logs-configure.md)
- [Start an investigation by searching for events in large datasets (preview)](investigate-large-datasets.md)
sentinel Migration Ingestion Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-tool.md
This article describes a set of different tools used to transfer your historical
## Azure Monitor Basic Logs/Archive
-Before you ingest data to Azure Monitor Basic Logs or Archive, for lower ingestion prices, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/basic-logs-configure.md#check-table-configuration). Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
+Before you ingest data to Azure Monitor Basic Logs or Archive, for lower ingestion prices, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/basic-logs-configure.md#view-a-tables-log-data-plan). Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
### Azure Monitor custom log ingestion tool
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
Title: Quickstart - Use Azure Service Bus queues from .NET app
description: This quickstart shows you how to send messages to and receive messages from Azure Service Bus queues using the .NET programming language. dotnet Previously updated : 09/21/2022 Last updated : 11/08/2022 ms.devlang: csharp
In this quickstart, you will do the following steps:
4. Write a .NET console application to receive those messages from the queue. > [!NOTE]
-> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
+> - This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
## Prerequisites
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/dotnet). - **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.
+## [Connection String](#tab/connection-string)
+
+## [Passwordless](#tab/passwordless)
+[!INCLUDE [service-bus-create-queue-portal](./includes/service-bus-create-queue-portal.md)]
+> [!IMPORTANT]
+> Note down the connection string for the namespace and the queue name. You'll use them later in this tutorial.
+ ## Send messages to the queue This section shows you how to create a .NET console application to send messages to a Service Bus queue.
This section shows you how to create a .NET console application to send messages
Install-Package Azure.Messaging.ServiceBus ``` - ## Add code to send messages to the queue 1. Replace the contents of `Program.cs` with the following code. The important steps are outlined below, with additional information in the code comments.
- ### [Passwordless (Recommended)](#tab/passwordless)
+ ### [Connection string](#tab/connection-string)
> [!IMPORTANT]
- > Per the `TODO` comment, update the placeholder values in the code snippets with the values from the Service Bus you created.
+ > Update the placeholder values (`<NAMESPACE-CONNECTION-STRING>` and `<QUEUE-NAME>`) in the code snippet with the actual values you noted down earlier.
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
* Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue. * Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method. * Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage).
This section shows you how to create a .NET console application to send messages
```csharp using Azure.Messaging.ServiceBus;
- using Azure.Identity;
-
- // name of your Service Bus queue
+ // the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// of the application, which is best practice when messages are being published or read // regularly. //
- // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, ensure that ports 5671 and 5672 are open.
- var clientOptions = new ServiceBusClientOptions
+ // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
+ // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
+
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- //TODO: Replace the "<NAMESPACE-NAME>" and "<QUEUE-NAME>" placeholders.
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential(),
- clientOptions);
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
sender = client.CreateSender("<QUEUE-NAME>"); // create a batch
This section shows you how to create a .NET console application to send messages
Console.WriteLine("Press any key to end the application"); Console.ReadKey(); ```
-
- ### [Connection string](#tab/connection-string)
+
+ ### [Passwordless](#tab/passwordless)
> [!IMPORTANT] > Per the `TODO` comment, update the placeholder values in the code snippets with the values from the Service Bus you created.
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
* Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue. * Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method. * Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage).
This section shows you how to create a .NET console application to send messages
```csharp using Azure.Messaging.ServiceBus;-
+ using Azure.Identity;
+
+ // name of your Service Bus queue
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// of the application, which is best practice when messages are being published or read // regularly. //
- // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
-
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
- var clientOptions = new ServiceBusClientOptions()
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
+ // If you use the default AmqpTcp, ensure that ports 5671 and 5672 are open.
+ var clientOptions = new ServiceBusClientOptions
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+ //TODO: Replace the "<NAMESPACE-NAME>" and "<QUEUE-NAME>" placeholders.
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(),
+ clientOptions);
sender = client.CreateSender("<QUEUE-NAME>"); // create a batch
This section shows you how to create a .NET console application to send messages
Console.WriteLine("Press any key to end the application"); Console.ReadKey(); ```
-
6. Build the project, and ensure that there are no errors.
In this section, you'll create a .NET console application that receives messages
### Add the NuGet packages to the project
-### [Passwordless (Recommended)](#tab/passwordless)
+### [Connection String](#tab/connection-string)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
```powershell Install-Package Azure.Messaging.ServiceBus
- Install-Package Azure.Identity
``` :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
-### [Connection String](#tab/connection-string)
+### [Passwordless](#tab/passwordless)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
+1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
```powershell Install-Package Azure.Messaging.ServiceBus
+ Install-Package Azure.Identity
``` :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
In this section, you'll add code to retrieve messages from the queue.
1. Within the `Program` class, add the following code:
- ### [Passwordless (Recommended)](#tab/passwordless)
-
+ ### [Connection string](#tab/connection-string)
+
```csharp using System.Threading.Tasks;
- using Azure.Identity;
using Azure.Messaging.ServiceBus; // the client that owns the connection and can be used to create senders and receivers
In this section, you'll add code to retrieve messages from the queue.
// the processor that reads and processes messages from the queue ServiceBusProcessor processor; ```
-
- ### [Connection string](#tab/connection-string)
-
+
+ ### [Passwordless](#tab/passwordless)
+ ```csharp using System.Threading.Tasks;
+ using Azure.Identity;
using Azure.Messaging.ServiceBus; // the client that owns the connection and can be used to create senders and receivers
In this section, you'll add code to retrieve messages from the queue.
// the processor that reads and processes messages from the queue ServiceBusProcessor processor; ```
-
1. Append the following methods to the end of the `Program` class.
In this section, you'll add code to retrieve messages from the queue.
1. Append the following code to the end of the `Program` class. The important steps are outlined below, with additional information in the code comments.
- ### [Passwordless (Recommended)](#tab/passwordless)
+ ### [Connection string](#tab/connection-string)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
- * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
* Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
- * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
-
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
```csharp // The Service Bus client types are safe to cache and use as a singleton for the lifetime
In this section, you'll add code to retrieve messages from the queue.
// Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443. // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
- // TODO: Replace the <NAMESPACE-NAME> placeholder
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential(),
- clientOptions);
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder
In this section, you'll add code to retrieve messages from the queue.
await client.DisposeAsync(); } ```
-
- ### [Connection string](#tab/connection-string)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
- * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ ### [Passwordless](#tab/passwordless)
+
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
* Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
+ * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
+ ```csharp // The Service Bus client types are safe to cache and use as a singleton for the lifetime
In this section, you'll add code to retrieve messages from the queue.
// Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443. // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(),
+ clientOptions);
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder
In this section, you'll add code to retrieve messages from the queue.
await client.DisposeAsync(); } ```
-
1. The completed `Program` class should match the following code:
- ### [Passwordless (Recommended)](#tab/passwordless)
+ ### [Connection string](#tab/connection-string)
```csharp
- using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
- using Azure.Identity;
+ using System;
+ using System.Threading.Tasks;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the queue.
// of the application, which is best practice when messages are being published or read // regularly. //
- // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
// If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.-
- // TODO: Replace the <NAMESPACE-NAME> and <QUEUE-NAME> placeholders
- var clientOptions = new ServiceBusClientOptions()
+
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential(), clientOptions);
-
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
In this section, you'll add code to retrieve messages from the queue.
return Task.CompletedTask; } ```
-
- ### [Connection string](#tab/connection-string)
+
+ ### [Passwordless](#tab/passwordless)
```csharp
- using Azure.Messaging.ServiceBus;
- using System;
using System.Threading.Tasks;
+ using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the queue.
// of the application, which is best practice when messages are being published or read // regularly. //
- // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
// If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
-
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
- var clientOptions = new ServiceBusClientOptions()
+
+ // TODO: Replace the <NAMESPACE-NAME> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
-
+ client = new ServiceBusClient("<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(), clientOptions);
+
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
In this section, you'll add code to retrieve messages from the queue.
return Task.CompletedTask; } ```
-
1. Build the project, and ensure that there are no errors.
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md
Title: Get started with Azure Service Bus topics (.NET)
description: This tutorial shows you how to send messages to Azure Service Bus topics and receive messages from topics' subscriptions using the .NET programming language. dotnet Previously updated : 10/27/2022 Last updated : 11/08/2022 ms.devlang: csharp
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/dotnet/). - **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.
+## [Connection String](#tab/connection-string)
+
+## [Passwordless](#tab/passwordless)
+[!INCLUDE [service-bus-create-topic-subscription-portal](./includes/service-bus-create-topic-subscription-portal.md)]
This section shows you how to create a .NET console application to send messages
Install-Package Azure.Messaging.ServiceBus ``` + ### Add code to send messages to the topic 1. Replace the contents of Program.cs with the following code. The important steps are outlined below, with additional information in the code comments.
- ## [Passwordless (Recommended)](#tab/passwordless)
+ ## [Connection String](#tab/connection-string)
+
+ > [!IMPORTANT]
+ > Update the placeholder values (`<NAMESPACE-CONNECTION-STRING>` and `<TOPIC-NAME>`) in the code snippet with the actual values you noted down earlier.
1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace. 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
This section shows you how to create a .NET console application to send messages
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
- using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// The Service Bus client types are safe to cache and use as a singleton for the lifetime // of the application, which is best practice when messages are being published or read // regularly.-
- //TODO: Replace the "<NAMESPACE-NAME>" and "<TOPIC-NAME>" placeholders.
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential());
+ //TODO: Replace the "<NAMESPACE-CONNECTION-STRING>" and "<TOPIC-NAME>" placeholders.
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
sender = client.CreateSender("<TOPIC-NAME>"); // create a batch
This section shows you how to create a .NET console application to send messages
Console.ReadKey(); ```
- ## [Connection String](#tab/connection-string)
+ ## [Passwordless](#tab/passwordless)
1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace. 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
This section shows you how to create a .NET console application to send messages
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// The Service Bus client types are safe to cache and use as a singleton for the lifetime // of the application, which is best practice when messages are being published or read // regularly.
- //TODO: Replace the "<NAMESPACE-CONNECTION-STRING>" and "<TOPIC-NAME>" placeholders.
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
+
+ //TODO: Replace the "<NAMESPACE-NAME>" and "<TOPIC-NAME>" placeholders.
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
sender = client.CreateSender("<TOPIC-NAME>"); // create a batch
In this section, you'll create a .NET console application that receives messages
### Add the NuGet packages to the project
-### [Passwordless (Recommended)](#tab/passwordless)
+
+### [Connection String](#tab/connection-string)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
- ```powershell
+ ```Powershell
Install-Package Azure.Messaging.ServiceBus
- Install-Package Azure.Identity
``` :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
-### [Connection String](#tab/connection-string)
+### [Passwordless](#tab/passwordless)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
+1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
- ```Powershell
+ ```powershell
Install-Package Azure.Messaging.ServiceBus
+ Install-Package Azure.Identity
``` :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
In this section, you'll add code to retrieve messages from the subscription.
1. Replace the existing contents of `Program.cs` with the following properties and methods:
- ## [Passwordless (Recommended)](#tab/passwordless)
+
+ ## [Connection String](#tab/connection-string)
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
- using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// handle received messages async Task MessageHandler(ProcessMessageEventArgs args) {
+ // TODO: Replace the <TOPIC-SUBSCRIPTION-NAME> placeholder
string body = args.Message.Body.ToString();
- Console.WriteLine($"Received: {body} from subscription.");
+ Console.WriteLine($"Received: {body} from subscription: <TOPIC-SUBSCRIPTION-NAME>");
// complete the message. messages is deleted from the subscription. await args.CompleteMessageAsync(args.Message);
In this section, you'll add code to retrieve messages from the subscription.
} ```
- ## [Connection String](#tab/connection-string)
+ ## [Passwordless](#tab/passwordless)
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// handle received messages async Task MessageHandler(ProcessMessageEventArgs args) {
- // TODO: Replace the <TOPIC-SUBSCRIPTION-NAME> placeholder
string body = args.Message.Body.ToString();
- Console.WriteLine($"Received: {body} from subscription: <TOPIC-SUBSCRIPTION-NAME>");
+ Console.WriteLine($"Received: {body} from subscription.");
// complete the message. messages is deleted from the subscription. await args.CompleteMessageAsync(args.Message);
In this section, you'll add code to retrieve messages from the subscription.
1. Append the following code to the end of `Program.cs`.
- ## [Passwordless (Recommended)](#tab/passwordless)
+ ## [Connection String](#tab/connection-string)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object.
+ > [!IMPORTANT]
+ > Update the placeholder values (`<NAMESPACE-CONNECTION-STRING>`, `<TOPIC-NAME>`, and `<SUBSCRIPTION-NAME>`) in the code snippet with the actual values you noted down earlier.
+
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
* Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus topic. * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object. * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <NAMESPACE-NAME> placeholder
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential());
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
} ```
- ## [Connection String](#tab/connection-string)
+ ## [Passwordless](#tab/passwordless)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object.
* Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus topic. * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object. * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <CONNECTION-STRING-VALUE> placeholder
- client = new ServiceBusClient("<CONNECTION-STRING-VALUE>">);
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
1. Here's what your `Program.cs` should look like:
- ## [Passwordless (Recommended)](#tab/passwordless)
-
+ ## [Connection String](#tab/connection-string)
+ ```csharp using System; using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
- using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <NAMESPACE-NAME> placeholder
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential());
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
} ```
- ## [Connection String](#tab/connection-string)
-
+ ## [Passwordless](#tab/passwordless)
+
```csharp using System; using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <CONNECTION-STRING-VALUE> placeholder
- client = new ServiceBusClient("<CONNECTION-STRING-VALUE>">);
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
await client.DisposeAsync(); } ```- 1. Build the project, and ensure that there are no errors.
service-fabric Service Fabric Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-overview.md
Application monitoring tracks how features and components of your application ar
* Is my application throwing unhandled exceptions? * What is happening within the services running inside my containers?
-The great thing about application monitoring is that developers can use whatever tools and framework they'd like since it lives within the context of your application! You can learn more about the Azure solution for application monitoring with Azure Monitor - Application Insights in [Event analysis with Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md).
+The great thing about application monitoring is that developers can use whatever tools and framework they'd like since it lives within the context of your application! You can learn more about the Azure solution for application monitoring with Azure Monitor Application Insights in [Event analysis with Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md).
We also have a tutorial with how to [set this up for .NET Applications](service-fabric-tutorial-monitoring-aspnet.md). This tutorial goes over how to install the right tools, an example to write custom telemetry in your application, and viewing the application diagnostics and telemetry in the Azure portal.
static-web-apps Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-overview.md
The following constraints apply to all API backends:
- Route rules for APIs only support [redirects](configuration.md#defining-routes) and [securing routes with roles](configuration.md#securing-routes-with-roles). - Only HTTP requests are supported for APIs. WebSocket, for example, is not supported. - The maximum duration of each API request is 45 seconds.
+- Network-isolated backends are not supported.
## Next steps
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Actions are applied to the filtered blobs when the run condition is met.
Lifecycle management supports tiering and deletion of current versions, previous versions, and blob snapshots. Define at least one action for each rule.
+> [!NOTE]
+> Tiering is not yet supported in a premium block blob storage account. For all other accounts, tiering is allowed only on block blobs, not on append or page blobs.
+
| Action | Current Version | Snapshot | Previous Versions |
|--|--|--|--|
| tierToCool | Supported for `blockBlob` | Supported | Supported |
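As an example of how such a rule is expressed, a minimal lifecycle policy that moves current versions of block blobs to the cool tier might look like the following sketch; the rule name and 30-day threshold are illustrative:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "move-to-cool",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ]
        }
      }
    }
  ]
}
```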
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
description: This article provides reference information for the azcopy copy com
Previously updated : 10/22/2022 Last updated : 11/08/2022
Copies source data to a destination location. The supported directions are:
- local <-> Azure Files (Share/directory SAS authentication)
- local <-> Azure Data Lake Storage Gen2 (SAS, OAuth, or SharedKey authentication)
- Azure Blob (SAS or public) -> Azure Blob (SAS or OAuth authentication)
+- Azure Blob (SAS or OAuth authentication) -> Azure Blob (SAS or OAuth authentication) - See [Guidelines](./storage-use-azcopy-blobs-copy.md#guidelines).
- Azure Blob (SAS or public) -> Azure Files (SAS)
- Azure Files (SAS) -> Azure Files (SAS)
- Azure Files (SAS) -> Azure Blob (SAS or OAuth authentication)
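For example, a blob-to-blob service-side copy authorized with OAuth on both ends (after running `azcopy login`) might look like this sketch; the account, container, and blob names are placeholders:

```azcopy
azcopy copy 'https://<source-account>.blob.core.windows.net/<container>/<blob>' 'https://<destination-account>.blob.core.windows.net/<container>/<blob>'
```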
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
description: This article contains a collection of AzCopy example commands that
Previously updated : 09/29/2022 Last updated : 11/08/2022
See the [Get started with AzCopy](storage-use-azcopy-v10.md) article to download
Apply the following guidelines to your AzCopy commands.
+- Source and destination accounts must belong to the same Azure AD tenant.
+
- Your client must have network access to both the source and destination storage accounts. To learn how to configure the network settings for each storage account, see [Configure Azure Storage firewalls and virtual networks](storage-network-security.md?toc=/azure/storage/blobs/toc.json).
- If you copy to a premium block blob storage account, omit the access tier of a blob from the copy operation by setting `s2s-preserve-access-tier` to `false` (for example: `--s2s-preserve-access-tier=false`). Premium block blob storage accounts don't support access tiers.
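Putting the last guideline together, a copy into a premium block blob account might look like this sketch (account and container names are placeholders):

```azcopy
azcopy copy 'https://<source-account>.blob.core.windows.net/<container>' 'https://<premium-account>.blob.core.windows.net/<container>' --recursive --s2s-preserve-access-tier=false
```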
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
Each snapshot creates a restore point that represents the time the snapshot star
You can either keep the restored data warehouse and the current one, or delete one of them. If you want to replace the current data warehouse with the restored data warehouse, you can rename it using [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) with the MODIFY NAME option.
-To restore a data warehouse, see [Restore a dedicated SQL pool](sql-data-warehouse-restore-points.md#create-user-defined-restore-points-through-the-azure-portal).
+To restore a data warehouse, see [Restore a dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-restore-points.md#create-user-defined-restore-points-through-the-azure-portal).
-To restore a deleted data warehouse, see [Restore a deleted database](sql-data-warehouse-restore-deleted-dw.md), or if the entire server was deleted, see [Restore a data warehouse from a deleted server](sql-data-warehouse-restore-from-deleted-server.md).
+To restore a deleted data warehouse, see [Restore a deleted database (formerly SQL DW)](sql-data-warehouse-restore-deleted-dw.md), or if the entire server was deleted, see [Restore a data warehouse from a deleted server (formerly SQL DW)](sql-data-warehouse-restore-from-deleted-server.md).
+
+> [!NOTE]
+> Table-level restore is not supported in dedicated SQL pools. You can only recover an entire database from your backup, and then copy the required table(s) by using:
+> - ETL tool activities such as [Copy Activity](/azure/data-factory/copy-activity-overview)
+> - Export and import:
+>   - Export the data from the restored backup into your data lake by using CETAS (see this [CETAS example](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=sql-server-linux-ver16&preserve-view=true#d-use-create-external-table-as-select-exporting-data-as-parquet))
+>   - Import the data by using [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest) or [PolyBase](/azure/synapse-analytics/sql/load-data-overview#options-for-loading-with-polybase)
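A hedged sketch of that export/import path in T-SQL follows; the external data source, file format, table names, and storage URL are placeholders rather than anything from the article:

```sql
-- On the restored copy: export the table to the data lake as Parquet (CETAS).
CREATE EXTERNAL TABLE ext.FactSales
WITH (
    LOCATION = '/restored/factsales/',
    DATA_SOURCE = MyDataLake,        -- placeholder external data source
    FILE_FORMAT = ParquetFormat      -- placeholder Parquet file format
)
AS
SELECT * FROM dbo.FactSales;

-- On the target database: load the exported files with COPY INTO.
COPY INTO dbo.FactSales
FROM 'https://<storage-account>.blob.core.windows.net/<container>/restored/factsales/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')  -- assumes managed identity access to storage
);
```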
## Cross subscription restore
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
* Configure multiple ExpressRoute circuits (different providers) to connect to one hub and use the hub-to-hub connectivity provided by Virtual WAN for inter-region traffic flows.
-* Contact the product team to take part in the gated public preview. In this preview, traffic between the 2 hubs traverses through the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft Edge routers/MSEE). To use this feature during preview, email **previewpreferh2h@microsoft.com** with the Virtual WAN IDs, Subscription ID, and the Azure region. Expect a response within 48 business hours (Monday-Friday) with confirmation that the feature is enabled.
+* Configure AS-Path as the Hub Routing Preference for your Virtual Hub. This ensures traffic between the 2 hubs traverses through the Virtual hub router in each hub and uses the hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft Edge routers). For more information, see [Configure virtual hub routing preference](howto-virtual-hub-routing-preference.md).
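If you script the change, the Az.Network module exposes this setting through `Update-AzVirtualHub` (to our knowledge; treat the parameter below as an assumption and verify against the linked article):

```azurepowershell
# Placeholder names; -HubRoutingPreference is assumed to accept "ASPath".
Update-AzVirtualHub -ResourceGroupName "<resource-group>" -Name "<hub-name>" -HubRoutingPreference "ASPath"
```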
### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub?