Updates from: 08/28/2021 03:07:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services
description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 08/20/2021
Last updated : 08/27/2021
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
In the table below, any item marked as fixed means that the proper behavior can
| Extension attributes use dot "." notation before attribute names instead of colon ":" notation | Yes | December 18, 2018 | downgrade to customappSSO |
| Patch requests for multi-value attributes contain invalid path filter syntax | Yes | December 18, 2018 | downgrade to customappSSO |
| Group creation requests contain an invalid schema URI | Yes | December 18, 2018 | downgrade to customappSSO |
-| Update PATCH behavior to ensure compliance (e.g. active as boolean and proper group membership removals) | No | TBD| use preview flag |
+| Update PATCH behavior to ensure compliance (e.g. active as boolean and proper group membership removals) | No | TBD| use feature flag |
## Flags to alter the SCIM behavior
Use the flags below in the tenant URL of your application in order to change the default SCIM client behavior.
Use the following URL to update PATCH behavior and ensure SCIM compliance. The f
- Requests to replace multiple attributes
- Requests to remove a group member
-This behavior is currently only available when using the flag, but will become the default behavior over the next few months. Note this preview flag currently does not work with on-demand provisioning.
+This behavior is currently only available when using the flag, but will become the default behavior over the next few months. Note this feature flag currently does not work with on-demand provisioning.
* **URL (SCIM Compliant):** aadOptscim062020
* **SCIM RFC references:**
  * https://tools.ietf.org/html/rfc7644#section-3.5.2
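For example, if the tenant URL configured for provisioning were `https://api.contoso.com/scim` (a hypothetical endpoint), appending the flag as a query parameter would give `https://api.contoso.com/scim?aadOptscim062020`.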
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-claims-mapping-policy-type.md
If the source is transformation, the **TransformationID** element must be included
The ID element identifies which property on the source provides the value for the claim. The following table lists the values of ID valid for each value of Source.
+> [!WARNING]
+> Currently, the only available multi-valued claim sources on a user object are multi-valued extension attributes which have been synced from AADConnect. Other properties, such as OtherMails and tags, are multi-valued but only one value is emitted when selected as a source.
#### Table 3: Valid ID values per source

| Source | ID | Description |
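To make the **Source**/**ID** pairing concrete, here is a minimal sketch of a claims-mapping policy definition expressed as a TypeScript object; the `user`/`employeeid` pair and the JWT claim name are illustrative assumptions, not values drawn from this article.

```typescript
// Illustrative shape of a claims-mapping policy definition.
// The "user"/"employeeid" pair and the JwtClaimType are assumed
// examples; consult Table 3 for the ID values valid per source.
const claimsMappingPolicy = {
  ClaimsMappingPolicy: {
    Version: 1,
    IncludeBasicClaimSet: "true",
    ClaimsSchema: [
      {
        Source: "user",             // where the claim value comes from
        ID: "employeeid",           // which property on the source to read
        JwtClaimType: "employeeId", // name of the emitted JWT claim
      },
    ],
  },
};
```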
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
Previously updated : 07/06/2021
Last updated : 08/27/2021
Each code sample includes a _README.md_ file that describes how to build the project
These samples show how to write a single-page application secured with the Microsoft identity platform. These samples use one of the flavors of MSAL.js.
> [!div class="mx-tdCol2BreakAll"]
> | Language/<br/>Platform | Code sample(s) <br/>on GitHub | Auth<br/> libraries | Auth flow |
> | - | -- | - | -- |
> | Angular | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Call .NET Core web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/4-Deployment/README.md)| MSAL Angular | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Proof of Possession (PoP)|
> | Blazor WebAssembly | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/MyOrg/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/B2C/README.md)<br/>&#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-graph-user/Call-MSGraph/README.md)<br/>&#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/Deploy-to-Azure/README.md) | MSAL.js | Authorization code with PKCE |
> | JavaScript | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/1-call-api/README.md)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/2-call-api-b2c/README.md)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Call Node.js web API via OBO and CA](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/2-call-api-api-c)| MSAL.js | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access (CA) |
> | React | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Call Node.js web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploy to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/2-deploy-static/README.md)| MSAL React | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access (CA)<br/>&#8226; Proof of Possession (PoP) |
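As a minimal sketch of the sign-in step these SPA samples share, the following assumes the `@azure/msal-browser` package (v2-era API) with placeholder values for the client ID and scopes; it is not taken from any one sample above.

```typescript
// Minimal SPA sign-in sketch with @azure/msal-browser.
// clientId, authority, and scopes are placeholders.
import { PublicClientApplication } from "@azure/msal-browser";

const pca = new PublicClientApplication({
  auth: {
    clientId: "YOUR_CLIENT_ID",
    authority: "https://login.microsoftonline.com/common",
  },
});

async function signIn() {
  // MSAL Browser runs the authorization code flow with PKCE under the hood.
  const result = await pca.loginPopup({ scopes: ["User.Read"] });
  console.log(result.account?.username);
}
```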
## Web applications
The following samples illustrate web applications that sign in users. Some samples also demonstrate the application calling Microsoft Graph, or your own web API with the user's identity.
> [!div class="mx-tdCol2BreakAll"]
> | Language/<br/>Platform | Code sample(s)<br/> on GitHub | Auth<br/> libraries | Auth flow |
> | - | | - | -- |
> | ASP.NET Core| ASP.NET Core Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/README.md) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/1-5-B2C/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md) <br/> &#8226; [Customize token cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-2-TokenCache/README.md) <br/> &#8226; [Call Graph (multi-tenant)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md) <br/> &#8226; [Call Azure REST APIs](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/3-WebApp-multi-APIs/README.md) <br/> &#8226; [Protect web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-1-MyOrg/README.md) <br/> &#8226; [Protect web API (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-2-B2C/README.md) <br/> &#8226; [Protect multi-tenant web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-3-AnyOrg/Readme.md) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md) <br/> &#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/6-Deploy-to-Azure/README.md) | &#8226; MSAL.NET<br/> &#8226; Microsoft.Identity.Web | &#8226; OpenID connect <br/> &#8226; Authorization code <br/> &#8226; On-Behalf-Of|
> | Blazor | Blazor Server Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/MyOrg) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/B2C) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-graph-user/Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/MyOrg) <br/> &#8226; [Call web API (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/B2C) | MSAL.NET | Authorization code with PKCE|
> | ASP.NET Core|[Advanced Token Cache Scenarios](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | &#8226; MSAL.NET <br/> &#8226; Microsoft.Identity.Web | On-Behalf-Of (OBO) |
> | ASP.NET Core|[Use the Conditional Access auth context to perform step\-up authentication](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | &#8226; MSAL.NET <br/> &#8226; Microsoft.Identity.Web | Authorization code |
> | ASP.NET Core|[Active Directory FS to Azure AD migration](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | MSAL.NET | &#8226; SAML <br/> &#8226; OpenID connect |
> | ASP.NET | &#8226; [Microsoft Graph Training Sample](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) <br/> &#8226; [Sign in users and call Microsoft Graph with admin restricted scope](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) <br/> &#8226; [Quickstart: Sign in users](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | MSAL.NET | &#8226; OpenID connect <br/> &#8226; Authorization code |
> | Java </p> Spring |Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | &#8226; MSAL Java <br/> &#8226; Azure AD Boot Starter | Authorization code |
> | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java | Authorization code |
> | Java | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-webapp)| MSAL Java | Authorization code |
> | Java </p> Spring| Sign in users and call Microsoft Graph via OBO </p> &#8226; [Web API](https://github.com/Azure-Samples/ms-identity-java-webapi) | MSAL Java | &#8226; Authorization code <br/> &#8226; On-Behalf-Of (OBO) |
> | Node.js </p> Express | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) <br/> &#8226; [Web app that signs in users](https://github.com/Azure-Samples/ms-identity-node) | MSAL Node | Authorization code |
> | Python </p> Flask | Flask Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>&#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | MSAL Python | Authorization code |
> | Python </p> Django | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | Authorization code |
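As a minimal sketch of the authorization code flow these web app samples use, the following assumes `@azure/msal-node`; the client ID, secret, tenant, and redirect URI are placeholders.

```typescript
// Confidential web app sketch with @azure/msal-node.
// All identifier values below are placeholders.
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "YOUR_CLIENT_ID",
    authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
    clientSecret: "YOUR_CLIENT_SECRET",
  },
});

async function getSignInUrl() {
  // Step 1: redirect the user to this URL to sign in.
  return cca.getAuthCodeUrl({
    scopes: ["User.Read"],
    redirectUri: "http://localhost:3000/redirect",
  });
}

async function redeemCode(code: string) {
  // Step 2: on the redirect handler, swap the returned code for tokens.
  return cca.acquireTokenByCode({
    code,
    scopes: ["User.Read"],
    redirectUri: "http://localhost:3000/redirect",
  });
}
```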
The following samples show how to protect a web API with the Microsoft identity platform, and how to call a downstream API from the web API.
> [!div class="mx-tdCol2BreakAll"]
->| Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
->| -- | -- |-- |-- |
->| ASP.NET | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapi-onbehalfof) | MSAL.NET | On-Behalf-Of (OBO) |
->| Java | [Sign in users](https://github.com/Azure-Samples/ms-identity-java-webapi) | MSAL Java | On-Behalf-Of (OBO) |
->| Node.js | &#8226; [Protect a Node.js web API](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2) <br/> &#8226; [Protect a Node.js Web API with Azure AD B2C](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | MSAL Node | Authorization bearer |
+> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
+> | -- | -- |-- |-- |
+> | ASP.NET | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapi-onbehalfof) | MSAL.NET | On-Behalf-Of (OBO) |
+> | ASP.NET Core | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | MSAL.NET | On-Behalf-Of (OBO) |
+> | Java | [Sign in users](https://github.com/Azure-Samples/ms-identity-java-webapi) | MSAL Java | On-Behalf-Of (OBO) |
+> | Node.js | &#8226; [Protect a Node.js web API](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2) <br/> &#8226; [Protect a Node.js Web API with Azure AD B2C](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | MSAL Node | Authorization bearer |
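Several rows above use the On-Behalf-Of flow; as a sketch of what that exchange looks like with `@azure/msal-node` (placeholder identifiers, not code from the samples):

```typescript
// On-Behalf-Of sketch: a protected web API exchanges the caller's
// bearer token for a token to a downstream API. Placeholder values.
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "API_CLIENT_ID",
    authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
    clientSecret: "API_CLIENT_SECRET",
  },
});

async function callGraphOnBehalfOf(incomingAccessToken: string) {
  const result = await cca.acquireTokenOnBehalfOf({
    oboAssertion: incomingAccessToken, // the token the client sent to this API
    scopes: ["https://graph.microsoft.com/User.Read"],
  });
  return result?.accessToken;
}
```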
## Desktop
-The following samples show public client desktop applications that access the Microsoft Graph API, or your own web API in the name of the user. Apart from the *Desktop (Console) with Web Account Manager (WAM)* sample, all these client applications use the Microsoft Authentication Library (MSAL).
+The following samples show public client desktop applications that access the Microsoft Graph API, or your own web API in the name of the user. Apart from the _Desktop (Console) with Web Account Manager (WAM)_ sample, all these client applications use the Microsoft Authentication Library (MSAL).
> [!div class="mx-tdCol2BreakAll"]
-> | Language/<br/>Platform | Code sample(s) <br/> on GitHub | Auth<br/> libraries | Auth flow |
-> | - | -- | - | -- |
->| .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
->| .NET | &#8226; [Call Microsoft Graph with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/2-Call-OwnApi/README.md) | MSAL.NET | Authorization code with PKCE |
->| .NET | [Invoke protected API with integrated windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated windows authentication |
->| ASP.NET | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | MSAL.NET | Credentials grant |
->| Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-desktop/) | MSAL Java | Integrated windows authentication |
->| Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
->| PowerShell | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | MSAL.NET | Resource owner password credentials |
->| Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Authorization code with PKCE |
->| Universal Windows Platform (UWP) | [Call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-native-uwp-wam) | Web account manager API | Integrated windows authentication |
->| XAML | &#8226; [Sign in users and call ASP.NET core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | MSAL.NET | Authorization code with PKCE |
+> | Language/<br/>Platform | Code sample(s) <br/> on GitHub | Auth<br/> libraries | Auth flow |
+> | - | -- | - | -- |
+> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
+> | .NET | &#8226; [Call Microsoft Graph with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/2-Call-OwnApi/README.md) | MSAL.NET | Authorization code with PKCE |
+> | .NET | [Invoke protected API with integrated windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated windows authentication |
+> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-desktop/) | MSAL Java | Integrated windows authentication |
+> | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
+> | PowerShell | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | MSAL.NET | Resource owner password credentials |
+> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Authorization code with PKCE |
+> | Universal Windows Platform (UWP) | [Call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-native-uwp-wam) | Web account manager API | Integrated windows authentication |
+> | Windows Presentation Foundation (WPF) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | MSAL.NET | Authorization code with PKCE |
+> | XAML | &#8226; [Sign in users and call ASP.NET core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | MSAL.NET | Authorization code with PKCE |
## Mobile
The following samples show public client mobile applications that access the Microsoft Graph API, or your own web API in the name of the user. These client applications use the Microsoft Authentication Library (MSAL).
> [!div class="mx-tdCol2BreakAll"]
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
> | iOS | &#8226; [Call Microsoft Graph native](https://github.com/Azure-Samples/ms-identity-mobile-apple-swift-objc) <br/> &#8226; [Call Microsoft Graph with Azure AD nxoauth](https://github.com/azure-samples/active-directory-ios-native-nxoauth2-v2) | MSAL iOS | Authorization code with PKCE |
> | Java | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-android-java) | MSAL Android | Authorization code with PKCE |
> | Kotlin | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-android-kotlin) | MSAL Android | Authorization code with PKCE |
> | Xamarin | &#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/1-Basic) <br/>&#8226; [Sign in users with broker and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/2-With-broker) | MSAL.NET | Authorization code with PKCE |
## Service / daemon
The following samples show an application that accesses the Microsoft Graph API with its own identity (with no user).
> [!div class="mx-tdCol2BreakAll"]
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
> | ASP.NET| &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi)<br/> &#8226; [Call own web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop) <br/> &#8226; [Using managed identity and Azure key vault](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/3-Using-KeyVault) <br/> &#8226; [Multi-tenant with Microsoft identity platform endpoint](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp) | MSAL.NET | Client credentials grant|
> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-daemon)| MSAL Java| Client credentials grant|
> | Node.js | [Sign in users and call web API](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) | MSAL Node | Client credentials grant |
> | Python | &#8226; [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> &#8226; [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | MSAL Python| Client credentials grant|
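All of these daemon samples use the client credentials grant; a minimal sketch with `@azure/msal-node` (placeholder client ID, tenant, and secret) looks like this:

```typescript
// Daemon (no user) sketch using the client credentials grant.
// clientId, tenant, and secret values are placeholders.
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "DAEMON_CLIENT_ID",
    authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
    clientSecret: "DAEMON_CLIENT_SECRET",
  },
});

async function getAppToken() {
  // "/.default" requests the application permissions already granted.
  const result = await cca.acquireTokenByClientCredential({
    scopes: ["https://graph.microsoft.com/.default"],
  });
  return result?.accessToken;
}
```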
## Azure Functions as web APIs
The following samples show how to protect an HTTP-triggered Azure Function by exposing it as a web API with the Microsoft identity platform, and how to call a downstream API from the web API.
> [!div class="mx-tdCol2BreakAll"]
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
> | .NET | [.NET Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-dotnet-webapi-azurefunctions) | MSAL.NET | Authorization code |
> | Node.js | [Node.js Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-azurefunctions) | MSAL Node | Authorization bearer |
> | Node.js | [Call Microsoft Graph API on behalf of a user](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-onbehalfof-azurefunctions) | MSAL Node| On-Behalf-Of (OBO)|
> | Python | [Python Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | MSAL Python | Authorization code |
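As a rough sketch of the bearer-token validation such a protected function performs, the following uses the community `jsonwebtoken` and `jwks-rsa` packages rather than code from any sample above; the tenant and audience values are placeholders.

```typescript
// Rough sketch of validating a bearer token inside an HTTP-triggered
// function, using jsonwebtoken + jwks-rsa. Not taken from the samples;
// TENANT_ID and AUDIENCE are placeholders.
import jwt, { JwtHeader, SigningKeyCallback } from "jsonwebtoken";
import jwksClient from "jwks-rsa";

const TENANT_ID = "YOUR_TENANT_ID";
const AUDIENCE = "api://YOUR_API_CLIENT_ID";

const keys = jwksClient({
  jwksUri: `https://login.microsoftonline.com/${TENANT_ID}/discovery/v2.0/keys`,
});

// Resolve the signing key matching the token's "kid" header.
function getKey(header: JwtHeader, callback: SigningKeyCallback) {
  keys.getSigningKey(header.kid as string, (err, key) => {
    callback(err, key?.getPublicKey());
  });
}

function validateBearer(token: string): Promise<jwt.JwtPayload> {
  return new Promise((resolve, reject) =>
    jwt.verify(token, getKey, { audience: AUDIENCE }, (err, decoded) =>
      err ? reject(err) : resolve(decoded as jwt.JwtPayload)
    )
  );
}
```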
## Headless
The following sample shows a public client application running on a device without a web browser. The app can be a command-line tool, an app running on Linux or Mac, or an IoT application. The sample features an app accessing the Microsoft Graph API, in the name of a user who signs in interactively on another device (such as a mobile phone). This client application uses the Microsoft Authentication Library (MSAL).
> [!div class="mx-tdCol2BreakAll"]
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
> | .NET core | [Invoke protected API from text-only device](https://github.com/azure-samples/active-directory-dotnetcore-devicecodeflow-v2) | MSAL.NET | Device code|
> | Java | [Sign in users and invoke protected API](https://github.com/Azure-Samples/ms-identity-java-devicecodeflow) | MSAL Java | Device code |
> | Python | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | MSAL Python | Device code |
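The device code flow these samples share boils down to displaying a one-time code that the user redeems on another device; a minimal sketch with `@azure/msal-node` (placeholder client ID and tenant) follows.

```typescript
// Headless device code sketch with @azure/msal-node.
// clientId and tenant values are placeholders.
import { PublicClientApplication } from "@azure/msal-node";

const pca = new PublicClientApplication({
  auth: {
    clientId: "YOUR_CLIENT_ID",
    authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
  },
});

async function signInWithDeviceCode() {
  const result = await pca.acquireTokenByDeviceCode({
    scopes: ["User.Read"],
    // Show the user the verification URL and the one-time code.
    deviceCodeCallback: (response) => console.log(response.message),
  });
  return result?.accessToken;
}
```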
## Multi-tenant SaaS
-The following samples show how to configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. Configuring your application to be *multi-tenant* means that you can offer a **Software as a Service** (SaaS) application to many organizations, allowing their users to be able to sign-in to your application after providing consent.
+The following samples show how to configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. Configuring your application to be _multi-tenant_ means that you can offer a **Software as a Service** (SaaS) application to many organizations, allowing their users to be able to sign-in to your application after providing consent.
> [!div class="mx-tdCol2BreakAll"]
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
> | ASP.NET Core | [ASP.NET Core MVC web application calls Microsoft Graph API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-3-Multi-Tenant) | MSAL.NET | OpenID connect |
> | ASP.NET Core | [ASP.NET Core MVC web application calls ASP.NET Core Web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-3-AnyOrg) | MSAL.NET | Authorization code |
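A sketch of the configuration change multi-tenancy usually hinges on: pointing the authority at `organizations` (or `common`) instead of a single tenant ID. The values below are placeholders, shown with `@azure/msal-node`.

```typescript
// Multi-tenant sketch: this authority accepts sign-ins from any
// Azure AD tenant rather than one tenant ID. Placeholder values.
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "YOUR_CLIENT_ID",
    authority: "https://login.microsoftonline.com/organizations",
    clientSecret: "YOUR_CLIENT_SECRET",
  },
});
```

With a multi-tenant authority, the token issuer varies per tenant, so the app (rather than the authority URL) has to decide which tenants it trusts.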
## Next steps
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `state` | recommended | A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
| `nonce` | required | A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. |
| `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are 'login', 'none', 'select_account', and 'consent'. `prompt=login` will force the user to enter their credentials on that request, negating single sign-on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single sign-on, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. |
-| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
+| `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
| `domain_hint` | optional | If included, it will skip the email-based discovery process that the user goes through on the sign-in page, leading to a slightly more streamlined user experience. This parameter is commonly used for Line of Business apps that operate in a single tenant, where they will provide a domain name within a given tenant, forwarding the user to the federation provider for that tenant. Note that this hint prevents guests from signing into this application, and limits the use of cloud credentials like FIDO. |

At this point, the user will be asked to enter their credentials and complete the authentication. The Microsoft identity platform will also ensure that the user has consented to the permissions indicated in the `scope` query parameter. If the user has consented to **none** of those permissions, it will ask the user to consent to the required permissions. For more info, see [permissions, consent, and multi-tenant apps](v2-permissions-and-consent.md).
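As a sketch of how the parameters in the table above assemble into an implicit-grant authorize request: the `client_id` reuses the value from the snippet earlier, while the tenant, redirect URI, scopes, and hint are placeholders.

```typescript
// Sketch of building the authorize request described above.
// Only client_id comes from the article; other values are placeholders.
import { randomUUID } from "crypto";

const params = new URLSearchParams({
  client_id: "6731de76-14a6-49ae-97bc-6eba6914391e",
  response_type: "id_token token",
  redirect_uri: "http://localhost/myapp/",
  scope: "openid https://graph.microsoft.com/User.Read",
  response_mode: "fragment",
  state: randomUUID(),        // echoed back; CSRF protection
  nonce: randomUUID(),        // returned as an id_token claim; replay protection
  prompt: "select_account",
  login_hint: "user@contoso.com",
});

const authorizeUrl =
  `https://login.microsoftonline.com/common/oauth2/v2.0/authorize?${params}`;
```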
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 A5 for Faculty | M365EDU_A5_FACULTY | e97c048c-37a4-45fb-ab50-922fbf07a370 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/> Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| MICROSOFT 365 A5 FOR STUDENTS | M365EDU_A5_STUDENT | 46c119d4-0379-4a9d-85e4-97c66d3f909e | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Microsoft 365 A5 for students use benefit | M365EDU_A5_STUUSEBNFT | 31d57bc7-3a05-4867-ab53-97a17835a411 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP
(bf28f719-7844-4079-9c78-c1307898e192)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft 
Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
-| Microsoft 365 A5 without Audio Conferencing for students use benefit | M365EDU_A5_NOPSTNCONF_STUUSEBNFT | 81441ae1-0b31-4185-a6c0-32b6b84d419f| AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b <br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7 <br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Azure Active Directory Rights 
(bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Microsoft 365 A5 without Audio Conferencing for students use benefit | M365EDU_A5_NOPSTNCONF_STUUSEBNFT | 81441ae1-0b31-4185-a6c0-32b6b84d419f| AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Azure Active Directory Rights 
(bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| MICROSOFT 365 APPS FOR BUSINESS | O365_BUSINESS | cdd28e44-67e3-425e-be4c-737fab2899d3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 APPS FOR BUSINESS | SMB_BUSINESS | b214fe43-f5a3-4703-beeb-fa97188220fc | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 APPS FOR ENTERPRISE | OFFICESUBSCRIPTION | c2273bd0-dff7-4215-9ef5-2c7bcfb06425 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
-| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | ECHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
+| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
| MICROSOFT 365 BUSINESS BASIC | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | MICROSOFT 365 BUSINESS BASIC | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | MICROSOFT 365 BUSINESS STANDARD | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD 
(9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)| To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT DEFENDER FOR ENDPOINT | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
-| Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318 ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) |
+| Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) |
| Microsoft Defender for Office 365 (Plan 2) GCC | THREAT_INTELLIGENCE_GOV | 56a59ffb-9df1-421b-9e61-8b568583474d | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6) | | MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL(f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | MS IMAGINE ACADEMY | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) |
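The product (SKU) GUIDs and service plan GUIDs in the rows above can also be read directly from a tenant instead of being looked up by hand. A minimal sketch, assuming an access token with the Organization.Read.All permission has already been acquired (for example through MSAL; token acquisition is out of scope here), that lists each subscribed SKU and its service plans through the Microsoft Graph `subscribedSkus` endpoint:

```python
import requests  # assumed HTTP client; any client works

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"  # placeholder: acquire with Organization.Read.All

def list_subscribed_skus(token):
    """Return the tenant's subscribed SKUs, each with its service plans."""
    response = requests.get(
        f"{GRAPH}/subscribedSkus",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["value"]

for sku in list_subscribed_skus(ACCESS_TOKEN):
    # skuPartNumber corresponds to the String ID column; skuId to the GUID column.
    print(sku["skuPartNumber"], sku["skuId"])
    for plan in sku["servicePlans"]:
        # servicePlanName/servicePlanId correspond to the service plan columns.
        print("   ", plan["servicePlanName"], plan["servicePlanId"])
```

Paging via `@odata.nextLink` is omitted for brevity; most tenants return their SKUs in a single page.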
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Power Apps Plan 2 Trial | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | Common Data Service – VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170)<br/>Flow P2 Viral (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>PowerApps Trial (d5368ca3-357e-4acb-9c21-8495fb025d1f) | | MICROSOFT POWER AUTOMATE PLAN 2 | FLOW_P2 | 4755df59-3f73-41ab-a249-596ad72b5504 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | | MICROSOFT INTUNE SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/> EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/> MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
-| Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> Power Apps (Plan 2) (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) |
+| Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> Power Apps (Plan 2) (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) |
| MICROSOFT STREAM | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | | Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | |Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) | | MICROSOFT TEAMS (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
-| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (s8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 |
+| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | | Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) |
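To check whether a specific service plan from these tables is actually provisioned for a user, the user's `licenseDetails` can be inspected. A minimal sketch in the same style, assuming a token with the User.Read.All permission and using the WINDEFATP (Microsoft Defender for Endpoint) service plan GUID from the rows above as an example:

```python
import requests  # assumed HTTP client

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"  # placeholder: token with User.Read.All

# WINDEFATP (Microsoft Defender for Endpoint) service plan GUID from the table.
TARGET_PLAN_ID = "871d91ec-ec1a-452b-a83f-bd76c7d770ef"

def user_has_service_plan(token, user_principal_name, plan_id):
    """True if any license assigned to the user provisions the given plan."""
    response = requests.get(
        f"{GRAPH}/users/{user_principal_name}/licenseDetails",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return any(
        plan["servicePlanId"] == plan_id and plan["provisioningStatus"] == "Success"
        for detail in response.json()["value"]
        for plan in detail["servicePlans"]
    )

# Hypothetical user; replace with a real UPN in your tenant.
print(user_has_service_plan(ACCESS_TOKEN, "user@contoso.com", TARGET_PLAN_ID))
```

Checking `provisioningStatus` distinguishes a plan that is merely included in an assigned SKU from one that is actually enabled for the user.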
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Office 365 E3_USGOV_DOD | ENTERPRISEPACK_USGOV_DOD | b107e5a3-3e60-4c0d-a184-a7e4395eb44c | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)| Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Office 365 E3_USGOV_GCCHIGH | ENTERPRISEPACK_USGOV_GCCHIGH | aea38a85-9bd5-4981-aa00-616b411205bf | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_GCCHIGH (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for GCCHigh (AR) (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | OFFICE 365 E4 | ENTERPRISEWITHSCAL | 1392051d-0cb9-4b7a-88d5-621fee5e8711 | BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MCOVOICECONF (27216c54-caf8-4d0d-97e2-517afb5c08f6)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>MICROSOFT STAFFHUB 
(8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MICROSOFT FORMS (PLAN E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 3) (27216c54-caf8-4d0d-97e2-517afb5c08f6)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365(c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| Office 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6- 95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 
(cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) 
(3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Office 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 
(cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) 
(3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| OFFICE 365 E5 WITHOUT AUDIO CONFERENCING | ENTERPRISEPREMIUM_NOPSTNCONF | 26d45bd9-adf1-46cd-a9e1-51e9a5524128 | ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | OFFICE 365 CLOUD APP SECURITY (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>OFFICE 365 ADVANCED EDISCOVERY (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>OFFICE 365 ADVANCED THREAT PROTECTION (PLAN 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 F3 | DESKLESSPACK | 4b585984-651b-448a-9e53-3b10f069cf7f | BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>INTUNE_365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>MCOIMP 
(afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_FIRSTLINE_1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>COMMON DATA SERVICE FOR TEAMS_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE ONLINE KIOSK (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW FOR OFFICE 365 K1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>MICROSOFT AZURE RIGHTS MANAGEMENT SERVICE (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>MICROSOFT FORMS (PLAN F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>MICROSOFT KAIZALA PRO PLAN 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>POWERAPPS FOR OFFICE 365 K1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>PROJECT FOR OFFICE (PLAN F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (FIRSTLINE) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 G3 GCC | ENTERPRISEPACK_GOV | 535a3a29-c5f0-42fe-8215-d3b9e1f38c4a | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV 
(0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Power BI Premium Per User Add-On | PBI_PREMIUM_PER_USER_ADDON | de376a03-6e5b-42ec-855f-093fb50b8ca5 | BI_AZURE_P3 (0bf3c642-7bb5-4ccc-884e-59d09df0266c) | Power BI Premium Per User (0bf3c642-7bb5-4ccc-884e-59d09df0266c) | | Power BI Premium Per User Dept | PBI_PREMIUM_PER_USER_DEPT | f168a3fb-7bcf-4a27-98c3-c235ea4b78b4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P3 (0bf3c642-7bb5-4ccc-884e-59d09df0266c)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Premium Per User (0bf3c642-7bb5-4ccc-884e-59d09df0266c)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
-| Power BI Pro CE | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION( 113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
+| Power BI Pro CE | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
| Power BI Pro Dept | POWER_BI_PRO_DEPT | 3a6a908c-09c5-406a-8170-8ebb63c42882 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | | Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) |
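To cross-check the GUIDs in these tables against your own tenant, you can list the tenant's subscribed SKUs and their service plans through Microsoft Graph. The following is a minimal sketch using `az rest`, assuming you're signed in with an account that's permitted to read licensing details:

```azurecli
# List each subscribed SKU with its GUID and the service plans it contains.
az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/subscribedSkus" \
    --query "value[].{sku:skuPartNumber, skuId:skuId, plans:servicePlans[].servicePlanName}"
```

The `skuPartNumber` and `servicePlanName` values returned should line up with the string identifiers in the tables above.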
active-directory Reference Connect Pta Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-pta-version-history.md
Here are related topics:
- [User sign-in with Azure AD Pass-through Authentication](how-to-connect-pta.md) - [Azure AD Pass-through Authentication agent installation](how-to-connect-pta-quick-start.md)
+## 1.5.2482.0
+### Release Status:
+07/07/2021: Released for download
+
+### New features and improvements
+
+- Upgraded the packages/libraries to newer versions signed using SHA-256RSA.
+ ## 1.5.1742.0 ### Release Status: 04/09/2020: Released for download
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal.md
Previously updated : 07/23/2021 Last updated : 08/21/2021
To add an application to your Azure AD tenant:
1. In the [Azure portal](https://portal.azure.com), in the **Azure services** pane select **Enterprise applications**. The **All applications** pane opens and displays a random sample of the applications in your Azure AD tenant. 2. In the **Enterprise applications** pane, select **New application**. 3. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning.
-4. Switch back to the legacy app galley experience: In the banner at the top of the **Add an application page**, select the link that says **You're in the new and improved app gallery experience. Click here to switch back to the legacy app gallery experience**.
![Search for an app by name or category](media/add-application-portal/browse-gallery.png)
-5. You can browse the gallery for the application you want to add, or search for the application by entering its name in the search box. Then select the application from the results.
-6. The next step depends on the way the developer of the application implemented single sign-on (SSO). Single sign-on can be implemented by app developers in four ways. The four ways are SAML, OpenID Connect, Password, and Linked. When you add an app, you can choose to filter and see only apps using a particular SSO implementation as shown in the screenshot. For example, a popular standard to implement SSO is called Security Assertion Markup Language (SAML). Another standard that is popular is called OpenId Connect (OIDC). The way you configure SSO with these standards is different so take note of the type of SSO that is implemented by the app that you are adding.
+4. You can browse the gallery for the application you want to add, or search for the application by entering its name in the search box. Then select the application from the results.
+5. The next step depends on how the developer of the application implemented single sign-on (SSO). App developers can implement SSO in four ways: SAML, OpenID Connect, Password, and Linked. When you add an app, you can filter to see only apps that use a particular SSO implementation, as shown in the screenshot. For example, Security Assertion Markup Language (SAML) and OpenID Connect (OIDC) are two popular standards for implementing SSO. Because configuration differs between these standards, take note of the type of SSO implemented by the app you're adding. (A scripted alternative to these portal steps follows below.)
- If the developer of the app used the **OIDC standard** for SSO then select **Sign Up**. A setup page appears. Next, go to the quickstart on setting up OIDC-based single sign-on. :::image type="content" source="media/add-application-portal/sign-up-oidc-sso.png" alt-text="Screenshot shows adding an OIDC-based SSO app.":::
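If you'd rather script this task, gallery applications can also be added through the Microsoft Graph application templates API. The following is a hedged sketch using `az rest`; the display name `GitHub` is a placeholder example, and `<template-id>` comes from the first call:

```azurecli
# Look up the gallery template ID for the app you want to add.
az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/applicationTemplates?\$filter=displayName eq 'GitHub'"

# Instantiate the template, which adds the app and its service principal to your tenant.
az rest --method POST \
    --url "https://graph.microsoft.com/v1.0/applicationTemplates/<template-id>/instantiate" \
    --body '{"displayName": "GitHub - Contoso"}'
```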
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
Title: 'Quickstart: Delete an application from your tenant'
description: This quickstart uses the Azure portal to delete an application from your Azure Active Directory (Azure AD) tenant.
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/protect-against-consent-phishing.md
+
+ Title: Protecting against consent phishing | Azure AD app management
+description: Learn ways to mitigate app-based consent phishing attacks using Azure AD.
+Last updated : 08/09/2021
+#Customer intent: As a developer, I want to learn how to protect against app-based consent phishing attacks so I can protect my users from malicious threat actors.
++
+# Protecting against consent phishing
+
+Productivity is no longer confined to private networks, and work has shifted dramatically toward cloud services. While cloud applications enable employees to be productive remotely, attackers can also use application-based attacks to gain access to valuable organization data. You may be familiar with attacks focused on users, such as email phishing or credential compromise. ***Consent phishing*** is another threat vector to be aware of.
+This article explores what consent phishing is, what Microsoft does to protect you, and what steps organizations can take to stay safe.
+
+## What is consent phishing?
+
+Consent phishing attacks trick users into granting permissions to malicious cloud apps. These malicious apps can then gain access to users' legitimate cloud services and data. Unlike credential compromise, *threat actors* who perform consent phishing will target users who can grant access to their personal or organizational data directly. The consent screen displays all permissions the app receives. Because the application is hosted by a legitimate provider (such as Microsoft's identity platform), unsuspecting users accept the terms or hit '*Accept*', which grants a malicious application the requested permissions to the user's or organization's data.
++
+*An example of an OAuth app that is requesting access to a wide variety of permissions.*
+
+## Mitigating consent phishing attacks using Azure AD
+
+Admins, users, or Microsoft security researchers may flag OAuth applications that appear to behave suspiciously. A flagged application will be reviewed by Microsoft to determine whether the app violates the terms of service. If a violation is confirmed, Azure AD will disable the application and prevent further use across all Microsoft services.
+
+When Azure AD disables an OAuth application, a few things happen:
+- The malicious application and related service principals are placed into a fully disabled state. Any new token requests or requests for refresh tokens will be denied, but existing access tokens will still be valid until their expiration.
+- The disabled state surfaces through the *disabledByMicrosoftStatus* property exposed on the related [application](/graph/api/resources/application?view=graph-rest-1.0&preserve-view=true) and [service principal](/graph/api/resources/serviceprincipal?view=graph-rest-1.0&preserve-view=true) resource types in Microsoft Graph, as shown in the query sketch after this list.
+- Global admins whose users consented to the application before Microsoft disabled it should receive an email describing the action taken and recommended steps they can take to investigate and improve their security posture.
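+
+For example, you can check the disabled state of a suspect application directly. The following is a minimal sketch using the Azure CLI, where `<object-id>` is a placeholder for the service principal's object ID:
+
+```azurecli
+# Inspect whether Microsoft has disabled this service principal.
+az rest --method GET \
+    --url "https://graph.microsoft.com/v1.0/servicePrincipals/<object-id>?\$select=displayName,disabledByMicrosoftStatus"
+```
+
+A non-null *disabledByMicrosoftStatus* value indicates that Microsoft has disabled the application.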
+
+## Recommended response and remediation
+
+If your organization has been impacted by an application disabled by Microsoft, we recommend these immediate steps to keep your environment secure:
+
+1. Investigate the application activity for the disabled application, including:
+ - The delegated permissions or application permissions requested by the application.
+ - The Azure AD audit logs for activity by the application and sign-in activity for users authorized to use the application (see the query sketch after this list).
+1. Review and implement the [guidance on defending against illicit consent grants](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants?view=o365-worldwide&preserve-view=true) in Microsoft cloud products, including auditing permissions and consent for the disabled application or any other suspicious apps found during review.
+1. Implement best practices for hardening against consent phishing, described below.
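+
+As referenced in the investigation step above, sign-in activity for the disabled application can be pulled from the Azure AD sign-in logs through Microsoft Graph. The following is a hedged sketch; `<app-id>` is a placeholder for the application (client) ID, and it assumes you hold a role and permission that can read the logs (for example, *AuditLog.Read.All*):
+
+```azurecli
+# List recent sign-in events for the application under investigation.
+az rest --method GET \
+    --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=appId eq '<app-id>'"
+```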
++
+## Best practices for hardening against consent phishing attacks
+
+At Microsoft, we want to put admins in control by providing the right insights and capabilities to control how applications are allowed and used within organizations. While attackers will never rest, there are steps organizations can take to improve their security posture. Some best practices to follow include:
+
+* Educate your organization on how our permissions and consent framework works
+ - Understand the data and the permissions an application is asking for and understand how [permissions and consent](../develop/v2-permissions-and-consent.md) work within our platform.
+ - Ensure administrators know how to [manage and evaluate consent requests](./manage-consent-requests.md).
+ - Routinely [audit apps and consented permissions](/azure/security/fundamentals/steps-secure-identity#audit-apps-and-consented-permissions) in your organization to ensure the applications in use access only the data they need and adhere to the principles of least privilege; the sketch after this list shows one way to enumerate consented permissions.
+* Know how to spot and block common consent phishing tactics
+ - Check for poor spelling and grammar. If an email message or the application's consent screen has spelling and grammatical errors, it's likely a suspicious application. In that case, you can report it directly on the [consent prompt](../develop/application-consent-experience.md#building-blocks-of-the-consent-prompt) with the "*Report it here*" link, and Microsoft will investigate whether it's a malicious application and disable it if confirmed.
+ - Don't rely on app names and domain URLs as a source of authenticity. Attackers like to spoof app names and domains to make a malicious app appear to come from a legitimate service or company. Instead, validate the source of the domain URL and use applications from [verified publishers](../develop/publisher-verification-overview.md) when possible.
+ - Block [consent phishing emails with Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/set-up-anti-phishing-policies?view=o365-worldwide&preserve-view=true#impersonation-settings-in-anti-phishing-policies-in-microsoft-defender-for-office-365) by protecting against phishing campaigns where an attacker is impersonating a known user in your organization.
+ - Configure Microsoft cloud app security policies such as [activity policies](/cloud-app-security/user-activity-policies), [anomaly detection](/cloud-app-security/anomaly-detection-policy), and [OAuth app policies](/cloud-app-security/app-permission-policy) to help manage and take action on abnormal application activity in your organization.
+ - Investigate and hunt for consent phishing attacks by following the guidance on [advanced hunting with Microsoft 365 Defender](/microsoft-365/security/defender/advanced-hunting-overview?view=o365-worldwide&preserve-view=true).
+* Allow access to apps you trust and protect against those you don't trust
+ - Use applications that have been publisher verified. [Publisher verification](../develop/publisher-verification-overview.md) helps admins and end users understand the authenticity of application developers through a Microsoft-supported vetting process.
+ - [Configure user consent settings](./configure-user-consent.md?tabs=azure-portal) to allow users to only consent to specific applications you trust, such as applications developed by your organization or from verified publishers.
+ - Create proactive [app governance](/microsoft-365/compliance/app-governance-manage-app-governance?view=o365-worldwide&preserve-view=true) policies to monitor third-party app behavior on the Microsoft 365 platform to address common suspicious app behaviors.
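+
+For the auditing recommendation above, one way to enumerate delegated permission grants tenant-wide is through Microsoft Graph. A minimal sketch, assuming a permission that can read grants (for example, *Directory.Read.All*):
+
+```azurecli
+# List OAuth2 (delegated) permission grants so consented scopes can be reviewed.
+az rest --method GET \
+    --url "https://graph.microsoft.com/v1.0/oauth2PermissionGrants" \
+    --query "value[].{clientId:clientId, consentType:consentType, scope:scope}"
+```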
+
+## Next steps
+
+* [App consent grant investigation](/security/compass/incident-response-playbook-app-consent)
+* [Managing access to apps](./what-is-access-management.md)
+* [Restrict user consent operations in Azure AD](/azure/security/fundamentals/steps-secure-identity#restrict-user-consent-operations)
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-ad-rbac.md
Last updated 03/17/2021
# Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities in Azure Kubernetes Service
-Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. You can also configure Kubernetes role-based access control (Kubernetes RBAC) to limit access to cluster resources based a user's identity or group membership.
+Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. Once authenticated, you can use the built-in Kubernetes role-based access control (Kubernetes RBAC) to manage access to namespaces and cluster resources based on a user's identity or group membership.
-This article shows you how to use Azure AD group membership to control access to namespaces and cluster resources using Kubernetes RBAC in an AKS cluster. Example groups and users are created in Azure AD, then Roles and RoleBindings are created in the AKS cluster to grant the appropriate permissions to create and view resources.
+This article shows you how to control access using Kubernetes RBAC in an AKS cluster based on Azure AD group membership. Example groups and users are created in Azure AD, then Roles and RoleBindings are created in the AKS cluster to grant the appropriate permissions to create and view resources.
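To make the end state concrete, here's a minimal sketch of the kind of Role and RoleBinding involved, using `kubectl` shorthand; the namespace, resource names, and group object ID are placeholders, and the article walks through the full configuration:

```bash
# Role granting broad rights within the dev namespace (placeholder scope).
kubectl create role dev-user-full-access --namespace dev \
    --verb=* --resource=pods,deployments,services

# Bind the Role to an Azure AD group through its object ID (placeholder GUID).
kubectl create rolebinding dev-user-access --namespace dev \
    --role=dev-user-full-access --group=<aad-group-object-id>
```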
## Before you begin
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-azure-cni.md
The IP address plan for an AKS cluster consists of a virtual network, at least o
| Virtual network | The Azure virtual network can be as large as /8, but is limited to 65,536 configured IP addresses. Consider all your networking needs, including communicating with services in other virtual networks, before configuring your address space. For example, if you configure too large of an address space, you may run into issues with overlapping other address spaces within your network.| | Subnet | Must be large enough to accommodate the nodes, pods, and all Kubernetes and Azure resources that might be provisioned in your cluster. For example, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. The subnet size should also take into account upgrade operations or future scaling needs.<p />To calculate the *minimum* subnet size including an additional node for upgrade operations: `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)`<p/>Example for a 50 node cluster: `(51) + (51 * 30 (default)) = 1,581` (/21 or larger)<p/>Example for a 50 node cluster that also includes provision to scale up an additional 10 nodes: `(61) + (61 * 30 (default)) = 1,891` (/21 or larger)<p>If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to *30*. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see [how to configure the maximum number of pods per node](#configure-maximumnew-clusters) to set this value when you deploy your cluster. | | Kubernetes service address range | This range should not be used by any network element on or connected to this virtual network. Service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. |
-| Kubernetes DNS service IP address | IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range, such as .1. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address. |
+| Kubernetes DNS service IP address | IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address. |
| Docker bridge address | The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs. You must pick an address space that does not collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. Default of 172.17.0.1/16. You can reuse this range across different AKS clusters. | ## Maximum pods per node
When you create an AKS cluster, the following parameters are configurable for Az
Although it's technically possible to specify a service address range within the same virtual network as your cluster, doing so is not recommended. Unpredictable behavior can result if overlapping IP ranges are used. For more information, see the [FAQ](#frequently-asked-questions) section of this article. For more information on Kubernetes services, see [Services][services] in the Kubernetes documentation.
-**Kubernetes DNS service IP address**: The IP address for the cluster's DNS service. This address must be within the *Kubernetes service address range*. Don't use the first IP address in your address range, such as .1. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address.
+**Kubernetes DNS service IP address**: The IP address for the cluster's DNS service. This address must be within the *Kubernetes service address range*. Don't use the first IP address in your address range. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address.
**Docker Bridge address**: The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically which could conflict with other CIDRs. You must pick an address space that does not collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR.
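Putting these parameters together, the following is a hedged `az aks create` sketch; the resource names and subnet ID are placeholders, and the DNS service IP deliberately avoids the first address in the service CIDR:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <subnet-resource-id> \
    --service-cidr 10.2.0.0/24 \
    --dns-service-ip 10.2.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --generate-ssh-keys
```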
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/coredns-custom.md
metadata:
namespace: kube-system data: test.server: | # you may select any name here, but it must end with the .server file extension
- <domain to be rewritten>.com:53 {
- errors
- cache 30
- rewrite name substring <domain to be rewritten>.com default.svc.cluster.local
- kubernetes cluster.local in-addr.arpa ip6.arpa {
- pods insecure
- fallthrough in-addr.arpa ip6.arpa
- }
- forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
- }
+    <domain to be rewritten>.com:53 {
+        log
+        errors
+        rewrite stop {
+            name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local
+            answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com
+        }
+        forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
+    }
``` > [!IMPORTANT]
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-internal-ip.md
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [s
This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+In addition, this article assumes you have an existing AKS cluster with an [integrated ACR][aks-integrated-acr].
## Import the images used by the Helm chart into your ACR
-This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+When using an AKS cluster with a private network, you often need to manage the provenance of the container images used within the cluster. See [Best practices for container image management and security in Azure Kubernetes Service (AKS)][aks-container-best-practices] for more information. To support this requirement, the examples in this article rely on importing the three container images used by the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] into your ACR.
+
+Use `az acr import` to import these images into your ACR.
```azurecli REGISTRY_NAME=<REGISTRY_NAME>
You can also:
[aks-supported versions]: supported-kubernetes-versions.md [ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
-[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[aks-container-best-practices]: operator-best-practices-container-image-management.md
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/managed-aad.md
# AKS-managed Azure Active Directory integration
-AKS-managed Azure AD integration is designed to simplify the Azure AD integration experience, where users were previously required to create a client app, a server app, and required the Azure AD tenant to grant Directory Read permissions. In the new version, the AKS resource provider manages the client and server apps for you.
+AKS-managed Azure AD integration simplifies the Azure AD integration process. Previously, users were required to create a client app and a server app, and the Azure AD tenant was required to grant Directory Read permissions. In the new version, the AKS resource provider manages the client and server apps for you.
## Azure AD authentication overview
Learn more about the Azure AD integration flow on the [Azure Active Directory in
* AKS-managed Azure AD integration can't be disabled * Changing a AKS-managed Azure AD integrated cluster to legacy AAD is not supported
-* non-Kubernetes RBAC enabled clusters aren't supported for AKS-managed Azure AD integration
+* Clusters without Kubernetes RBAC enabled aren't supported for AKS-managed Azure AD integration
* Changing the Azure AD tenant associated with AKS-managed Azure AD integration isn't supported ## Prerequisites
Use [these instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl/
## Before you begin
-For your cluster, you need an Azure AD group. This group is needed as admin group for the cluster to grant cluster admin permissions. You can use an existing Azure AD group, or create a new one. Record the object ID of your Azure AD group.
+For your cluster, you need an Azure AD group. This group will be registered as an admin group on the cluster to grant cluster admin permissions. You can use an existing Azure AD group, or create a new one. Record the object ID of your Azure AD group.
```azurecli-interactive # List existing groups in the directory
Once the cluster is created, you can start accessing it.
## Access an Azure AD enabled cluster
-You'll need the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role to do the following steps.
+Before you access the cluster using an Azure AD defined group, you'll need the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role.
Get the user credentials to access the cluster:
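A minimal sketch, assuming the placeholder resource group and cluster names used in the related AKS quickstarts:

```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
```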
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/update-credentials.md
Now continue on to [update AKS cluster with new service principal credentials](#
## Update AKS cluster with new service principal credentials > [!IMPORTANT]
-> For large clusters, updating the AKS cluster with a new service principal may take a long time to complete.
+> For large clusters, updating the AKS cluster with a new service principal may take a long time to complete. Consider reviewing and customizing the [node surge upgrade settings][node-surge-upgrade] to minimize disruption during cluster updates and upgrades.
Regardless of whether you chose to update the credentials for the existing service principal or create a service principal, you now update the AKS cluster with your new credentials using the [az aks update-credentials][az-aks-update-credentials] command. The variables for the *--service-principal* and *--client-secret* are used:
In this article, the service principal for the AKS cluster itself and the AAD In
[az-ad-sp-credential-list]: /cli/azure/ad/sp/credential#az_ad_sp_credential_list [az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az_ad_sp_credential_reset [node-image-upgrade]: ./node-image-upgrade.md
+[node-surge-upgrade]: upgrade-cluster.md#customize-node-surge-upgrade
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
az aks pod-identity add --resource-group myResourceGroup --cluster-name myAKSClu
> [!NOTE] > When you enable pod-managed identity on your AKS cluster, an AzurePodIdentityException named *aks-addon-exception* is added to the *kube-system* namespace. An AzurePodIdentityException allows pods with certain labels to access the Azure Instance Metadata Service (IMDS) endpoint without being intercepted by the node-managed identity (NMI) server. The *aks-addon-exception* allows AKS first-party addons, such as AAD pod-managed identity, to operate without having to manually configure an AzurePodIdentityException. Optionally, you can add, remove, and update an AzurePodIdentityException using `az aks pod-identity exception add`, `az aks pod-identity exception delete`, `az aks pod-identity exception update`, or `kubectl`.
+> The "POD_IDENTITY_NAME" has to be a valid [DNS subdomain name] as defined in [RFC 1123].
> [!NOTE] > When you assign the pod identity by using `pod-identity add`, the Azure CLI attempts to grant the Managed Identity Operator role over the pod identity (*IDENTITY_RESOURCE_ID*) to the cluster identity.
For more information on managed identities, see [Managed identities for Azure re
[az-group-create]: /cli/azure/group#az_group_create [az-identity-create]: /cli/azure/identity#az_identity_create [az-managed-identities]: ../active-directory/managed-identities-azure-resources/overview.md
-[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[RFC 1123]: https://tools.ietf.org/html/rfc1123
+[DNS subdomain name]: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-policies.md
Title: Policies in Azure API Management | Microsoft Docs
-description: Learn how to create, edit, and configure policies in API Management. See code examples and view additional available resources.
+description: Learn how to create, edit, and configure policies in API Management. See code examples and other available resources.
documentationcenter: ''
na Previously updated : 11/29/2017 Last updated : 08/25/2021 # Policies in Azure API Management
-In Azure API Management (APIM), policies are a powerful capability of the system that allow the publisher to change the behavior of the API through configuration. Policies are a collection of Statements that are executed sequentially on the request or response of an API. Popular Statements include format conversion from XML to JSON and call rate limiting to restrict the amount of incoming calls from a developer. Many more policies are available out of the box.
+In Azure API Management, API publishers can change API behavior through configuration using policies. Policies are a collection of statements executed sequentially on the request or response of an API. Popular statements include:
-Policies are applied inside the gateway which sits between the API consumer and the managed API. The gateway receives all requests and usually forwards them unaltered to the underlying API. However a policy can apply changes to both the inbound request and outbound response.
+* Format conversion from XML to JSON.
+* Call rate limiting to restrict the number of incoming calls from a developer.
-Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow][Control flow] and [Set variable][Set variable] policies are based on policy expressions. For more information, see [Advanced policies][Advanced policies] and [Policy expressions][Policy expressions].
+Many more policies are available out of the box.
+
+Policies are applied inside the gateway between the API consumer and the managed API. While the gateway receives requests and forwards them, unaltered, to the underlying API, a policy can apply changes to both the inbound request and outbound response.
+
+Unless the policy specifies otherwise, policy expressions can be used as attribute values or text values in any of the API Management policies. Some policies are based on policy expressions, such as the [Control flow][Control flow] and [Set variable][Set variable]. For more information, see the [Advanced policies][Advanced policies] and [Policy expressions][Policy expressions] articles.
## <a name="sections"> </a>Understanding policy configuration
-The policy definition is a simple XML document that describes a sequence of inbound and outbound statements. The XML can be edited directly in the definition window. A list of statements is provided to the right and statements applicable to the current scope are enabled and highlighted.
+Policy definitions are simple XML documents that describe a sequence of inbound and outbound statements. You can edit the XML directly in the definition window, which also provides:
+* A list of statements to the right.
+* Statements applicable to the current scope enabled and highlighted.
-Clicking an enabled statement will add the appropriate XML at the location of the cursor in the definition view.
+Clicking an enabled statement will add the appropriate XML at the cursor in the definition view.
> [!NOTE]
-> If the policy that you want to add is not enabled, ensure that you are in the correct scope for that policy. Each policy statement is designed for use in certain scopes and policy sections. To review the policy sections and scopes for a policy, check the **Usage** section for that policy in the [Policy Reference][Policy Reference].
->
->
+> If the policy that you want to add is not enabled, ensure that you are in the correct scope for that policy. Each policy statement is designed for use in certain scopes and policy sections. To review the policy sections and scopes for a policy, check the **Usage** section in the [Policy Reference][Policy Reference].
-The configuration is divided into `inbound`, `backend`, `outbound`, and `on-error`. The series of specified policy statements is executed in order for a request and a response.
+The configuration is divided into `inbound`, `backend`, `outbound`, and `on-error`. This series of specified policy statements is executed in order for a request and a response.
```xml <policies>
The configuration is divided into `inbound`, `backend`, `outbound`, and `on-erro
</policies> ```
-If there is an error during the processing of a request, any remaining steps in the `inbound`, `backend`, or `outbound` sections are skipped and execution jumps to the statements in the `on-error` section. By placing policy statements in the `on-error` section you can review the error by using the `context.LastError` property, inspect and customize the error response using the `set-body` policy, and configure what happens if an error occurs. There are error codes for built-in steps and for errors that may occur during the processing of policy statements. For more information, see [Error handling in API Management policies](./api-management-error-handling-policies.md).
+If an error occurs during the processing of a request:
+* Any remaining steps in the `inbound`, `backend`, or `outbound` sections are skipped.
+* Execution jumps to the statements in the `on-error` section.
+
+By placing policy statements in the `on-error` section, you can:
+* Review the error using the `context.LastError` property.
+* Inspect and customize the error response using the `set-body` policy.
+* Configure what happens if an error occurs.
+
+For error codes covering the following, see [Error handling in API Management policies](./api-management-error-handling-policies.md):
+* Built-in steps.
+* Errors that may occur during the processing of policy statements.
## <a name="scopes"> </a>How to configure policies
See [Policy samples](./policy-reference.md) for more code examples.
### Apply policies specified at different scopes
-If you have a policy at the global level and a policy configured for an API, then whenever that particular API is used both policies will be applied. API Management allows for deterministic ordering of combined policy statements via the `base` element.
+If you have a policy at the global level and a policy configured for an API, both policies will be applied whenever that particular API is used. API Management allows for deterministic ordering of combined policy statements via the `base` element.
```xml <policies>
If you have a policy at the global level and a policy configured for an API, the
</policies> ```
-In the example policy definition above, the `cross-domain` statement would execute before any higher policies which would in turn, be followed by the `find-and-replace` policy.
+In the example policy definition above:
+* The `cross-domain` statement would execute before any higher policies.
+* The `find-and-replace` policy would execute after any higher policies.
+
+>[!NOTE]
+> If you remove the `<base />` tag at the API scope, only policies configured at the API scope will be applied. Neither product nor global scope policies will be applied.
### Restrict incoming requests
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-subscriptions.md
na Previously updated : 11/14/2018 Last updated : 08/27/2021 # Subscriptions in Azure API Management
-Subscriptions are an important concept in Azure API Management. They're the most common way for API consumers to get access to APIs published through an API Management instance. This article provides an overview of the concept.
+In Azure API Management, subscriptions are the most common way for API consumers to access APIs published through an API Management instance. This article provides an overview of the concept.
## What are subscriptions?
-When you publish APIs through API Management, it's easy and common to secure access to those APIs by using subscription keys. Developers who need to consume the published APIs must include a valid subscription key in HTTP requests when they make calls to those APIs. Otherwise, the calls are rejected immediately by the API Management gateway. They aren't forwarded to the back-end services.
+By publishing APIs through API Management, you can easily secure API access using subscription keys. Consume the published APIs by including a valid subscription key in HTTP requests when calling those APIs. Without a valid subscription key, the calls will:
+* Be rejected immediately by the API Management gateway.
+* Not be forwarded to the back-end services.
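For instance, a caller might supply the key in the default `Ocp-Apim-Subscription-Key` header (the gateway also accepts a `subscription-key` query parameter). A minimal sketch, where the host, path, and key are placeholders:

```bash
# Call an API published through API Management, supplying the subscription
# key in the default header. Host, path, and key are placeholders.
curl "https://contoso.azure-api.net/echo/resource" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>"
```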
-To get a subscription key for accessing APIs, a subscription is required. A subscription is essentially a named container for a pair of subscription keys. Developers who need to consume the published APIs can get subscriptions. And they don't need approval from API publishers. API publishers can also create subscriptions directly for API consumers.
+To access APIs, you'll need a subscription and a subscription key. A *subscription* is a named container for a pair of subscription keys.
+
+Regularly regenerating keys is a common security precaution, so most Azure products requiring a subscription key will generate keys in pairs. Each application using the service can switch from *key A* to *key B* and regenerate key A with minimal disruption, and vice versa.
+
+In addition,
+
+* Developers can get subscriptions without approval from API publishers.
+* API publishers can create subscriptions directly for API consumers.
> [!TIP] > API Management also supports other mechanisms for securing access to APIs, including the following examples:
Subscriptions can be associated with various scopes: product, all APIs, or an in
### Subscriptions for a product
-Traditionally, subscriptions in API Management were always associated with a single [API product](api-management-terminology.md) scope. Developers found the list of products on the Developer Portal. Then they'd submit subscription requests for the products they wanted to use. After a subscription request is approved, either automatically or by API publishers, the developer can use the keys in it to access all APIs in the product. At present, developer portal only shows the product-scope subscriptions under user profile section.
+Traditionally, subscriptions in API Management were associated with a single [API product](api-management-terminology.md) scope. Developers:
+* Found the list of products on the developer portal.
+* Submitted subscription requests for the products they wanted to use.
+* Used the keys in those subscriptions (approved either automatically or by API publishers) to access all APIs in the product.
+  * APIs can be accessed with or without a subscription key, regardless of the subscription scope (product, all APIs, or a single API).
+
+Currently, the developer portal only shows the product-scope subscriptions under the **User Profile** section.
+
+> [!NOTE]
+> If you are using an API-scoped subscription key, any *policies* configured at the product scope are not applied to that subscription.
![Product subscriptions](./media/api-management-subscriptions/product-subscription.png)
Traditionally, subscriptions in API Management were always associated with a sin
### Subscriptions for all APIs or an individual API
-When we introduced the [Consumption](https://aka.ms/apimconsumptionblog) tier of API Management, we made a few changes to streamline key management:
-- First, we added two more subscription scopes: all APIs and a single API. The scope of subscriptions is no longer limited to an API product. It's now possible to create keys that grant access to an API, or all APIs within an API Management instance, without needing to create a product and add the APIs to it first. Also, each API Management instance now comes with an immutable, all-APIs subscription. This subscription makes it easier and more straightforward to test and debug APIs within the test console.
+With the addition of the [Consumption](https://aka.ms/apimconsumptionblog) tier of API Management, subscription key management became more streamlined.
+
+#### Two more subscription scopes
+
+Now that subscription scopes are no longer limited to an API product, you can create keys that grant access to either:
+* A single API, or
+* All APIs within an API Management instance.
+
+You no longer need to create a product and add APIs to it first.
+
+Each API Management instance now comes with an immutable, all-APIs subscription. This subscription makes it easier to test and debug APIs within the test console.
+
+#### Standalone subscriptions
+
+API Management now allows *standalone* subscriptions. You no longer need to associate subscriptions with a developer account. This feature is useful in scenarios such as several developers or teams sharing a subscription.
+
+Creating a subscription without assigning an owner makes it a standalone subscription. To grant developers and the rest of your team access to the standalone subscription key, either:
+* Manually share the subscription key.
+* Use a custom system to make the subscription key available to your team.
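As an illustration, a publisher could create a standalone, all-APIs subscription through the management REST API. A hedged sketch using `az rest`, where every identifier, the subscription name, and the api-version are placeholders or assumptions:

```azurecli
# Create a subscription with no owner (standalone), scoped to all APIs.
# Every identifier and the api-version below are placeholders/assumptions.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<azure-sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>/subscriptions/team-shared?api-version=2020-12-01" \
  --body '{"properties": {"scope": "/apis", "displayName": "Team shared"}}'
```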
-- Second, API Management now allows **standalone** subscriptions. Subscriptions are no longer required to be associated with a developer account. This feature is useful in scenarios such as when several developers or teams share a subscription.
+#### Creating subscriptions in the Azure portal
-- Finally, API publishers can now [create subscriptions](api-management-howto-create-subscriptions.md) directly in the Azure portal:
+API publishers can now [create subscriptions](api-management-howto-create-subscriptions.md) directly in the Azure portal:
- ![Flexible subscriptions](./media/api-management-subscriptions/flexible-subscription.png)
+![Flexible subscriptions](./media/api-management-subscriptions/flexible-subscription.png)
## Next steps Get more information on API Management:
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/websocket-api.md
Title: Import a WebSocket API using the Azure portal | Microsoft Docs
description: Learn how API Management supports WebSocket, add a WebSocket API, and WebSocket limitations. --++ Previously updated : 06/02/2021 Last updated : 08/25/2021
In this article, you will:
## Prerequisites - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).-- A [WebSocket API](https://www.websocket.org/echo.html).
+- A WebSocket API.
## WebSocket passthrough
Per the [WebSocket protocol](https://tools.ietf.org/html/rfc6455), when a client
|-|-| | Display name | The name by which your WebSocket API will be displayed. | | Name | Raw name of the WebSocket API. Automatically populates as you type the display name. |
- | WebSocket URL | The base URL with your websocket name. For example: ws://example.com/your-socket-name |
+ | WebSocket URL | The base URL with your websocket name. For example: *ws://example.com/your-socket-name* |
+ | URL scheme | Accept the default. |
+ | API URL suffix| Add a URL suffix to identify this specific API in this API Management instance. The suffix must be unique within the instance. |
| Products | Associate your WebSocket API with a product to publish it. | | Gateways | Associate your WebSocket API with existing gateways. |
Per the [WebSocket protocol](https://tools.ietf.org/html/rfc6455), when a client
1. Repeat preceding steps to test different payloads. 1. When testing is complete, select **Disconnect**.
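Outside the portal's test console, any WebSocket client can exercise the API. Here's a minimal sketch using the open-source `wscat` tool, where the gateway host, socket name, and subscription key are placeholders:

```bash
# Open a WebSocket connection through the API Management gateway, passing the
# subscription key as a query parameter. All values are placeholders.
npm install -g wscat
wscat -c "wss://contoso.azure-api.net/your-socket-name?subscription-key=<your-key>"
```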
+## View metrics and logs
+
+Use standard API Management and Azure Monitor features to [monitor](api-management-howto-use-azure-monitor.md) WebSocket APIs:
+
+* View API metrics in Azure Monitor
+* Optionally enable diagnostic settings to collect and view API Management gateway logs, which include WebSocket API operations
+
+For example, the following screenshot shows recent WebSocket API responses with code `101` from the **ApiManagementGatewayLogs** table. These results indicate the successful upgrade of the requests from HTTP to the WebSocket protocol.
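The same entries can be pulled from the command line with a Log Analytics query; a sketch assuming your workspace GUID and that diagnostic settings are already enabled:

```azurecli
# List recent protocol-upgrade (HTTP 101) responses from the gateway logs.
# Replace <workspace-guid> with your Log Analytics workspace ID.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "ApiManagementGatewayLogs | where ResponseCode == 101 | take 10"
```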
++ ## Limitations WebSocket APIs are available and supported in public preview through Azure portal, Management API, and Azure Resource Manager. Below are the current restrictions of WebSocket support in API Management:
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
automation Quickstart Create Automation Account Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/quickstart-create-automation-account-template.md
Title: 'Create an Automation account - Azure template'
+ Title: Create an Azure Automation account using a Resource Manager template
description: This article shows how to create an Automation account by using the Azure Resource Manager template. Previously updated : 07/20/2021 Last updated : 08/27/2021 - mvc - subject-armqs - mode-arm
-# Customer intent: I want to create an Automation account by using an Azure Resource Manager template so that I can automate processes with runbooks.
-# Create an Automation account by using ARM template
+# Create an Azure Automation account using a Resource Manager template
-Azure Automation delivers a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. This article shows you how to deploy an Azure Resource Manager template (ARM template) that creates an Automation account. Using an ARM template takes fewer steps compared to other deployment methods.
+Azure Automation delivers a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. This article shows you how to deploy an Azure Resource Manager template (ARM template) that creates an Automation account. Using an ARM template takes fewer steps compared to other deployment methods. The JSON template specifies default values for parameters that would likely be used as a standard configuration in your environment. You can store the template in an Azure storage account for shared access in your organization. For more information about working with templates, see [Deploy resources with ARM templates and the Azure CLI](../azure-resource-manager/templates/deploy-cli.md).
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.automation%2F101-automation%2Fazuredeploy.json)
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Review the template
-
-This sample template performs the following:
+The sample template performs the following steps:

* Automates the creation of an Azure Monitor Log Analytics workspace.
* Automates the creation of an Azure Automation account.
* Links the Automation account to the Log Analytics workspace.
* Adds sample Automation runbooks to the account.
->[!NOTE]
->Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Create Run As account](create-run-as-account.md).
-
-After you complete these steps, you need to [configure diagnostic settings](automation-manage-send-joblogs-log-analytics.md) for your Automation account to send runbook job status and job streams to the linked Log Analytics workspace.
+> [!NOTE]
+> Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Create Run As account](create-run-as-account.md).
-The template used in this article is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-automation/).
--
-### API versions
-
-The following table lists the API version for the resources used in this example.
-
-| Resource | Resource type | API version |
-|:|:|:|
-| [Workspace](/azure/templates/microsoft.operationalinsights/workspaces) | workspaces | 2020-03-01-preview |
-| [Automation account](/azure/templates/microsoft.automation/automationaccounts) | automation | 2020-01-13-preview |
-| [Workspace Linked services](/azure/templates/microsoft.operationalinsights/workspaces/linkedservices) | workspaces | 2020-03-01-preview |
-
-### Before you use the template
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-The JSON parameters template is configured for you to specify:
+## Prerequisites
-* The name of the workspace.
-* The region to create the workspace in.
-* The name of the Automation account.
-* The region to create the Automation account in.
+If you're new to Azure Automation and Azure Monitor, it's important to understand the following configuration details. This understanding can help you avoid errors when you try to create, configure, and use a Log Analytics workspace linked to your new Automation account.
-The following parameters in the template are set with a default value for the Log Analytics workspace:
+* Review [additional details](../azure-monitor/logs/resource-manager-workspace.md#create-a-log-analytics-workspace) to fully understand workspace configuration options, such as access control mode, pricing tier, retention, and capacity reservation level.
-* *sku* defaults to the per GB pricing tier released in the April 2018 pricing model.
-* *dataRetention* defaults to 30 days.
+* Review [workspace mappings](how-to/region-mappings.md) to specify the supported regions inline or in a parameter file. Only certain regions are supported for linking a Log Analytics workspace and an Automation account in your subscription.
->[!WARNING]
->If you want to create or configure a Log Analytics workspace in a subscription that has opted into the April 2018 pricing model, the only valid Log Analytics pricing tier is *PerGB2018*.
->
+* If you're new to Azure Monitor Logs and haven't deployed a workspace already, review the [workspace design guidance](../azure-monitor/logs/design-logs-deployment.md). This document helps you learn about access control and understand the recommended design implementation strategies for your organization.
-The JSON template specifies a default value for the other parameters that would likely be used as a standard configuration in your environment. You can store the template in an Azure storage account for shared access in your organization. For more information about working with templates, see [Deploy resources with ARM templates and the Azure CLI](../azure-resource-manager/templates/deploy-cli.md).
+## Review the template
-If you're new to Azure Automation and Azure Monitor, it's important that you understand the following configuration details. They can help you avoid errors when you try to create, configure, and use a Log Analytics workspace linked to your new Automation account.
+The template used in this article is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-automation/).
-* Review [additional details](../azure-monitor/logs/resource-manager-workspace.md#create-a-log-analytics-workspace) to fully understand workspace configuration options, such as access control mode, pricing tier, retention, and capacity reservation level.
-* Review [workspace mappings](how-to/region-mappings.md) to specify the supported regions inline or in a parameter file. Only certain regions are supported for linking a Log Analytics workspace and an Automation account in your subscription.
+The Azure resources defined in the template:
-* If you're new to Azure Monitor logs and have not deployed a workspace already, you should review the [workspace design guidance](../azure-monitor/logs/design-logs-deployment.md). It will help you to learn about access control, and understand the design implementation strategies we recommend for your organization.
+* [**Microsoft.OperationalInsights/workspaces**](/azure/templates/microsoft.operationalinsights/workspaces): creates an Azure Log Analytics workspace.
+* [**Microsoft.Automation/automationAccounts**](/azure/templates/microsoft.automation/automationaccounts): creates an Azure Automation account.
+* [**Microsoft.Automation/automationAccounts/runbooks**](/azure/templates/microsoft.automation/automationaccounts/runbooks): creates an Azure Automation account runbook.
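For orientation, here's a trimmed sketch of how the Automation account resource might appear in a template of this generation (the API version matches this quickstart's era; the parameter names are illustrative, not taken from the quickstart template itself):

```json
{
  "type": "Microsoft.Automation/automationAccounts",
  "apiVersion": "2020-01-13-preview",
  "name": "[parameters('automationAccountName')]",
  "location": "[parameters('location')]",
  "properties": {
    "sku": { "name": "Basic" }
  }
}
```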
## Deploy the template
-1. Select the following image to sign in to Azure and open a template. The template creates an Azure Automation account, a Log Analytics workspace, and links the Automation account to the workspace.
+1. Select the **Deploy to Azure** button below to sign in to Azure and open the ARM template.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.automation%2F101-automation%2Fazuredeploy.json)
-2. Enter the values.
-
- When you attempt to run the ARM template from PowerShell, CLI, or the Templates feature in the portal, if the `_artifactsLocation` parameter is not properly set, you will receive an error message similar to the following:
-
+1. Enter or select the following values:
+
+ |Property |Description |
+ |||
+ |Subscription |From the drop-down list, select your Azure subscription.|
+ |Resource group |From the drop-down list, select your existing resource group, or select **Create new**.|
+ |Region |This value will autopopulate.|
+ |Workspace name |Enter a name for your new Log Analytics Workspace.|
+ |Sku | Defaults to the per GB pricing tier released in the April 2018 pricing model. If you want to create or configure a Log Analytics workspace in a subscription that has opted into the April 2018 pricing model, the only valid Log Analytics pricing tier is `PerGB2018`.|
+ |Data retention |Defaults to 30 days.|
+ |Location |The value will autopopulate with the location used for the resource group.|
+ |Automation Account name | Enter a name for your new Automation account.|
+ |Sample graphical runbook name | Leave as is.|
+ |Sample graphical runbook description | Leave as is.|
+ |Sample PowerShell runbook name | Leave as is.|
+ |Sample PowerShell runbook description | Leave as is.|
+ |Sample Python2Runbook name |Leave as is.|
+ |Sample Python2Runbook description |Leave as is.|
+ |_artifacts Location |The URI to the artifacts location. Leave as is.<sup>*</sup>|
+ |_artifacts Location Sas Token | Leave blank. The sasToken required to access `_artifactsLocation`. When the template is deployed using the accompanying scripts, a `sasToken` will be automatically generated.|
+
+ <sup>*</sup> When you attempt to run the ARM template from PowerShell, CLI, or the Templates feature in the portal, if the `_artifactsLocation` parameter isn't properly set, you'll receive an error message similar to the following:
+
   `"message": "Deployment template validation failed: 'The template resource '_artifactsLocation' at line '96' and column '31' is not valid: The language expression property 'templateLink' doesn't exist, available properties are 'template, templateHash, parameters, mode, debugSetting, provisioningState'.. Please see https://aka.ms/arm-template-expressions for usage details.'."`
- To prevent this, when running from the Templates feature in the portal, specify the following for the `_artifactsLocation` parameter - `https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.automation/101-automation/azuredeploy.json`.
-
+
+ To prevent this error, when running from the Templates feature in the portal, specify the following value for the `_artifactsLocation` parameter - `https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.automation/101-automation/azuredeploy.json`.
+
   When you run from PowerShell, include the parameter and its value `-TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.automation/101-automation/azuredeploy.json`.
+
   When you run from Azure CLI, include the parameter and its value - `--template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.automation/101-automation/azuredeploy.json`.
+
For reference about PowerShell/CLI, see the following - [Create Azure Automation account (microsoft.com)](https://azure.microsoft.com/resources/templates/101-automation/) under the **Use the template** section.
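   For instance, a complete Azure CLI invocation might look like the following sketch; the resource group name and parameter names are assumptions rather than values confirmed from the template:

```azurecli
# Deploy the quickstart template into an existing resource group.
# The resource group and parameter names/values are placeholders.
az deployment group create \
  --resource-group my-automation-rg \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.automation/101-automation/azuredeploy.json \
  --parameters workspaceName=my-workspace automationAccountName=my-account
```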
-3. The deployment can take a few minutes to finish. When completed, the output is similar to the following:
+1. Select **Review + Create** and then **Create**. The deployment can take a few minutes to finish. When completed, the output is similar to the following image:
![Example result when deployment is complete](media/quickstart-create-automation-account-template/template-output.png) ## Review deployed resources
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. In the Azure portal, open the Automation account you just created.
+1. Once the deployment completes, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your **Resource group** page will list your new resources. From the list, select your new Automation account.
-3. From the left-pane, select **Runbooks**. On the **Runbooks** page, listed are three tutorial runbooks created with the Automation account.
+1. From the left-side, under **Process Automation**, select **Runbooks**. The **Runbooks** page lists the three sample runbooks created with the Automation account.
![Tutorial runbooks created with Automation account](./media/quickstart-create-automation-account-template/automation-sample-runbooks.png)
-4. From the left-pane, select **Linked workspace**. On the **Linked workspace** page, it shows the Log Analytics workspace you specified earlier linked to your Automation account.
+1. From the left-side, under **Related Resources**, select **Linked workspace**. The **Linked workspace** page shows the Log Analytics workspace you specified earlier that is linked to your Automation account.
![Automation account linked to the Log Analytics workspace](./media/quickstart-create-automation-account-template/automation-account-linked-workspace.png)
-## Clean up resources
-
-When you no longer need them, unlink the Automation account from the Log Analytics workspace, and then delete the Automation account and workspace.
- ## Next steps
-In this article, you created an Automation account, a Log Analytics workspace, and linked them together.
-
-To learn more, continue to the tutorials for Azure Automation.
-
-> [!div class="nextstepaction"]
-> [Azure Automation tutorials](learn/automation-tutorial-runbook-graphical.md)
+[Configure diagnostic settings](automation-manage-send-joblogs-log-analytics.md) for your Automation account to send runbook job status and job streams to the linked Log Analytics workspace.
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new-archive.md
+
+ Title: Archive for What's new in Azure Automation
+description: The What's new release notes in the Overview section of this content set contain six months of activity. Thereafter, the items are removed from the main article and put into this article.
++ Last updated : 08/27/2021+++
+# Archive for What's new in Azure Automation?
+
+The primary [What's new in Azure Automation?](whats-new.md) article contains updates for the last six months, while this article contains all the older information.
+
+What's new in Azure Automation provides you with information about:
+
+- The latest releases
+- Known issues
+- Bug fixes
+++
+## January 2021
+
+### Support for Automation and State Configuration declared GA in Switzerland West
+
+**Type:** New feature
+
+Automation account and State Configuration availability in the Switzerland West region. For more information, read the [announcement](https://azure.microsoft.com/updates/azure-automation-in-switzerland-west-region/).
+
+### Added Python 3 script to import module with multiple dependencies
+
+**Type:** New feature
+
+The script is available for download from our [GitHub repository](https://github.com/azureautomation/runbooks/blob/master/Utility/Python/import_py3package_from_pypi.py).
+
+### Hybrid Runbook Worker role support for Centos 8.x/RHEL 8.x/SLES 15
+
+**Type:** New feature
+
+The Hybrid Runbook Worker feature supports CentOS 8.x, RHEL 8.x, and SLES 15 distributions for only process automation on Hybrid Runbook Workers. See [Supported operating systems](automation-linux-hrw-install.md#supported-linux-operating-systems) for updates to the documentation to reflect these changes.
+
+### Update Management and Change Tracking availability in Australia East, East Asia, West US, and Central US regions
+
+**Type:** New feature
+
+Automation account, Change Tracking and Inventory, and Update Management are available in Australia East, East Asia, West US, and Central US regions.
+
+### Introduced public preview of Python 3 runbooks in US Government cloud
+
+**Type:** New feature
+Azure Automation introduces public preview support of Python 3 cloud and hybrid runbook execution in US Government cloud regions. For more information, see the [announcement](https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
+
+### Azure Automation runbooks moved from TechNet Script Center to GitHub
+
+**Type:** Plan for change
+
+The TechNet Script Center is retiring and all runbooks hosted in the Runbook gallery have been moved to our [Automation GitHub organization](https://github.com/azureautomation). For more information, read [Azure Automation Runbooks moving to GitHub](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-automation-runbooks-moving-to-github/ba-p/2039337).
+
+## December 2020
+
+### Azure Automation and Update Management Private Link GA
+
+**Type:** New feature
+
+Azure Automation and Update Management support announced as GA for Azure global and Government clouds. Azure Automation enabled Private Link support to secure execution of a runbook on a hybrid worker role, using Update Management to patch machines, invoking a runbook through a webhook, and using the State Configuration service to keep your machines compliant. For more information, read [Azure Automation Private Link support](https://azure.microsoft.com/updates/azure-automation-private-link).
+
+### Azure Automation classified as Grade-C certified on Accessibility
+
+**Type:** New feature
+
+Accessibility features of Microsoft products help agencies address global accessibility requirements. On the [blog announcement](https://cloudblogs.microsoft.com/industry-blog/government/2018/09/11/accessibility-conformance-reports/) page, search for **Azure Automation** to read the Accessibility conformance report for the Automation service.
+
+### Support for Automation and State Configuration GA in UAE North
+
+**Type:** New feature
+
+Automation account and State Configuration availability in the UAE North region. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-uae-north-region/).
+
+### Support for Automation and State Configuration GA in Germany West Central
+
+**Type:** New feature
+
+Automation account and State Configuration availability in the Germany West Central region. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-germany-west-central-region/).
+
+### DSC support for Oracle 6 and 7
+
+**Type:** New feature
+
+Manage Oracle Linux 6 and 7 machines with Automation State Configuration. See [Supported Linux distros](https://github.com/Azure/azure-linux-extensions/tree/master/DSC#4-supported-linux-distributions) for updates to the documentation to reflect these changes.
+
+### Public Preview for Python3 runbooks in Automation
+
+**Type:** New feature
+
+Azure Automation now supports Python 3 cloud and hybrid runbook execution in public preview in all regions in Azure global cloud. For more information, see the [announcement](https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
+
+## November 2020
+
+### DSC support for Ubuntu 18.04
+
+**Type:** New feature
+
+See [Supported Linux Distros](https://github.com/Azure/azure-linux-extensions/tree/master/DSC#4-supported-linux-distributions) for updates to the documentation reflecting these changes.
+
+## October 2020
+
+### Support for Automation and State Configuration GA in Switzerland North
+
+**Type:** New feature
+
+Automation account and State Configuration availability in Switzerland North. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-switzerland-north-region/).
+
+### Support for Automation and State Configuration GA in Brazil South East
+
+**Type:** New feature
+
+Automation account and State Configuration availability in Brazil South East. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-brazil-southeast-region/).
+
+### Update Management availability in South Central US
+
+**Type:** New feature
+
+Azure Automation region mapping updated to support Update Management feature in South Central US region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
+
+## September 2020
+
+### Start/Stop VMs during off-hours runbooks updated to use Azure Az modules
+
+**Type:** New feature
+
+Start/Stop VM runbooks have been updated to use Az modules in place of Azure Resource Manager modules. See [Start/Stop VMs during off-hours](automation-solution-vm-management.md) overview for updates to the documentation to reflect these changes.
+
+## August 2020
+
+### Published the DSC extension to support Azure Arc
+
+**Type:** New feature
+
+Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc-enabled servers DSC VM extension. For more information, read [Arc-enabled servers VM extensions overview](../azure-arc/servers/manage-vm-extensions.md).
+
+## July 2020
+
+### Introduced Public Preview of Private Link support in Automation
+
+**Type:** New feature
+
+Use Azure Private Link to securely connect virtual networks to Azure Automation using private endpoints. For more information, read the [announcement](https://azure.microsoft.com/updates/public-preview-private-link-azure-automation-is-now-available/).
+
+### Hybrid Runbook Worker support for Windows Server 2008 R2
+
+**Type:** New feature
+
+Automation Hybrid Runbook Worker supports the Windows Server 2008 R2 operating system. See [Supported operating systems](automation-windows-hrw-install.md#supported-windows-operating-system) for updates to the documentation to reflect these changes.
+
+### Update Management support for Windows Server 2008 R2
+
+**Type:** New feature
+
+Update Management supports assessing and patching the Windows Server 2008 R2 operating system. See [Supported operating systems](update-management/operating-system-requirements.md) for updates to the documentation to reflect these changes.
+
+### Automation diagnostic logs schema update
+
+**Type:** New feature
+
+Changed the schema of Azure Automation log data in the Log Analytics service. To learn more, see [Forward Azure Automation job data to Azure Monitor logs](automation-manage-send-joblogs-log-analytics.md#filter-job-status-output-converted-into-a-json-object).
+
+### Azure Lighthouse supports Automation Update Management
+
+**Type:** New feature
+
+Azure Lighthouse enables delegated resource management with Update Management for service providers and customers. Read more [here](https://azure.microsoft.com/blog/how-azure-lighthouse-enables-management-at-scale-for-service-providers/).
+
+## June 2020
+
+### Automation and Update Management availability in the US Gov Arizona region
+
+**Type:** New feature
+
+Automation account and Update Management are available in US Gov Arizona. For more information, see [announcement](https://azure.microsoft.com/updates/azure-automation-generally-available-in-usgov-arizona-region/).
+
+### Hybrid Runbook Worker onboarding script updated to use Az modules
+
+**Type:** New feature
+
+The New-OnPremiseHybridWorker runbook has been updated to support Az modules. For more information, see the package in the [PowerShell Gallery](https://www.powershellgallery.com/packages/New-OnPremiseHybridWorker/1.7).
+
+### Update Management availability in China East 2
+
+**Type:** New feature
+
+Azure Automation region mapping updated to support Update Management feature in China East 2 region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
+
+## May 2020
+
+### Updated Automation service DNS records from region-specific to Automation account-specific URLs
+
+**Type:** New feature
+
+Azure Automation DNS records have been updated to support Private Links. For more information, read the [announcement](https://azure.microsoft.com/updates/azure-automation-updateddns-records/).
+
+### Added capability to keep Automation runbooks and DSC scripts encrypted by default
+
+**Type:** New feature
+
+In addition to the improved security of assets, runbooks and DSC scripts are also encrypted to enhance Azure Automation security.
+
+## April 2020
+
+### Retirement of the Automation watcher task
+
+**Type:** Plan for change
+
+Azure Logic Apps is now the recommended and supported way to monitor for events, schedule recurring tasks, and trigger actions. There will be no further investments in Watcher task functionality. To learn more, see [Schedule and run recurring automated tasks with Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+
+## March 2020
+
+### Support for Impact Level 5 (IL5) compute isolation in Azure commercial and Government cloud
+
+**Type:** New feature
+
+Azure Automation Hybrid Runbook Worker can be used in Azure Government to support Impact Level 5 workloads. To learn more, see our [documentation](automation-hybrid-runbook-worker.md#support-for-impact-level-5-il5).
+
+## February 2020
+
+### Introduced support for Azure virtual network service tags
+
+**Type:** New feature
+
+Automation support for service tags lets you allow or deny traffic for the Automation service in a subset of scenarios. To learn more, see the [documentation](automation-hybrid-runbook-worker.md#service-tags).
+
+### Enable TLS 1.2 support for Azure Automation service
+
+**Type:** Plan for change
+
+Azure Automation fully supports TLS 1.2 and all client calls (through webhooks, DSC nodes, and hybrid worker). TLS 1.1 and TLS 1.0 are still supported for backward compatibility with older clients until customers standardize and fully migrate to TLS 1.2.
+
+## January 2020
+
+### Introduced Public Preview of customer-managed keys for Azure Automation
+
+**Type:** New feature
+
+Customers can manage and secure encryption of Azure Automation assets using their own managed keys. For more information, see [Use of customer-managed keys](automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account).
+
+### Retirement of Azure Service Management (ASM) REST APIs for Azure Automation
+
+**Type:** Retire
+
+Azure Service Management (ASM) REST APIs for Azure Automation will be retired and no longer supported after January 30, 2020. To learn more, see the [announcement](https://azure.microsoft.com/updates/azure-automation-service-management-rest-apis-are-being-retired-april-30-2019/).
+
+## Next steps
+
+If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 08/17/2021 Last updated : 08/27/2021
Azure Automation receives improvements on an ongoing basis. To stay up to date w
- Known issues - Bug fixes
-This page is updated monthly, so revisit it regularly.
+This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
## August 2021
Customers should evaluate and plan for migration from Azure Automation State Con
## July 2021
-### Preview Support for User Assigned Managed Identities
+### Preview support for user-assigned managed identity
**Type:** New feature
-Azure Automation now supports [User Assigned Managed Identities](automation-secure-asset-encryption.md) for cloud jobs in Azure public , Gov & China regions. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-user-assigned-identities/) for more information.
+Azure Automation now supports [user-assigned Managed Identities](automation-secure-asset-encryption.md) for cloud jobs in Azure global, Azure Government, and Azure China regions. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-user-assigned-identities/) for more information.
### General Availability of customer-managed keys for Azure Automation
Start/Stop VMs during off-hours (v1) will deprecate on May 21, 2022. Customers s
**Type:** New feature
-Region mapping have been updated to support Update Management & Change Tracking in Norway East, UAE North, North Central US, Brazil South, and Korea Central. For more information, see [Supported mappings](./how-to/region-mappings.md#supported-mappings).
+Region mappings have been updated to support Update Management and Change Tracking in Norway East, UAE North, North Central US, Brazil South, and Korea Central. For more information, see [Supported mappings](./how-to/region-mappings.md#supported-mappings).
-### Support for System Assigned Managed Identities
+### Support for system-assigned Managed Identities
**Type:** New feature
-Azure Automation now supports [System Assigned Managed Identities](./automation-security-overview.md#managed-identities-preview) for cloud and Hybrid jobs in Azure public and Gov regions. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-system-assigned-managed-identities/) for more information.
+Azure Automation now supports [system-assigned Managed Identities](./automation-security-overview.md#managed-identities-preview) for cloud and Hybrid jobs in Azure global and Azure Government regions. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-system-assigned-managed-identities/) for more information.
## March 2021
For more information, see [Azure Policy reference](./policy-reference.md).
**Type:** New feature
-Use Process Automation and State configuration capabilities in South India. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-in-south-india-region/) for more information.
+Use the Process Automation and State Configuration features in South India. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-in-south-india-region/) for more information.
### Support for Automation and State Configuration declared GA in UK West **Type:** New feature
-Use Process Automation and State configuration capabilities in UK West. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-uk-west-region/).
+Use the Process Automation and State Configuration features in UK West. For more information, read the [announcement](https://azure.microsoft.com/updates/azure-automation-in-uk-west-region/).
### Support for Automation and State Configuration declared GA in UAE Central **Type:** New feature
-Use Process Automation and State configuration capabilities in UAE Central. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-in-uae-central-region/) for more information.
+Use the Process Automation and State Configuration features in UAE Central. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-in-uae-central-region/) for more information.
### Support for Automation and State Configuration available in Australia Central 2, Norway West, and France South
For more information, see [Use a webhook from an ARM template](./automation-webh
See the [full list](./update-management/operating-system-requirements.md) of supported Linux operating systems for more details.
-### In-region data residency support for Brazil South and South East Asia
+### In-region data residency support for Brazil South and South East Asia
**Type:** New feature
You can use the new Azure Policy compliance rule to allow creation of jobs, webh
Automation Update Management feature is available in East US, France Central, and North Europe regions. See [Supported region mapping](how-to/region-mappings.md) for updates to the documentation reflecting this change.
-## January 2021
-
-### Support for Automation and State Configuration declared GA in Switzerland West
-
-**Type:** New feature
-
-Automation account and State Configuration availability in the Switzerland West region. For more information, read the [announcement](https://azure.microsoft.com/updates/azure-automation-in-switzerland-west-region/).
-
-### Added Python 3 script to import module with multiple dependencies
-
-**Type:** New feature
-
-The script is available for download from our [GitHub repository](https://github.com/azureautomation/runbooks/blob/master/Utility/Python/import_py3package_from_pypi.py).
-
-### Hybrid Runbook Worker role support for Centos 8.x/RHEL 8.x/SLES 15
-
-**Type.** New feature
-
-The Hybrid Runbook Worker feature supports CentOS 8.x, REHL 8.x, and SLES 15 distributions for only process automation on Hybrid Runbook Workers. See [Supported operating systems](automation-linux-hrw-install.md#supported-linux-operating-systems) for updates to the documentation to reflect these changes.
-
-### Update Management and Change Tracking availability in Australia East, East Asia, West US, and Central US regions
-
-**Type:** New feature
-
-Automation account, Change Tracking and Inventory, and Update Management are available in Australia East, East Asia, West US, and Central US regions.
-
-### Introduced public preview of Python 3 runbooks in US Government cloud
-
-**Type:** New feature
-Azure Automation introduces public preview support of Python 3 cloud and hybrid runbook execution in US Government cloud regions. For more information, see the [announcement](https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
-
-### Azure Automation runbooks moved from TechNet Script Center to GitHub
-
-**Type:** Plan for change
-
-The TechNet Script Center is retiring and all runbooks hosted in the Runbook gallery have been moved to our [Automation GitHub organization](https://github.com/azureautomation). For more information, read [Azure Automation Runbooks moving to GitHub](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-automation-runbooks-moving-to-github/ba-p/2039337).
-
-## December 2020
-
-### Azure Automation and Update Management Private Link GA
-
-**Type:** New feature
-
-Azure Automation and Update Management support announced as GA for Azure global and Government clouds. Azure Automation enabled Private Link support to secure execution of a runbook on a hybrid worker role, using Update Management to patch machines, invoking a runbook through a webhook, and using State Configuration service to keep your machines complaint. For more information, read [Azure Automation Private Link support](https://azure.microsoft.com/updates/azure-automation-private-link)
-
-### Azure Automation classified as Grade-C certified on Accessibility
-
-**Type:** New feature
-
-Accessibility features of Microsoft products help agencies address global accessibility requirements. On the [blog announcement](https://cloudblogs.microsoft.com/industry-blog/government/2018/09/11/accessibility-conformance-reports/) page, search for **Azure Automation** to read the Accessibility conformance report for the Automation service.
-
-### Support for Automation and State Configuration GA in UAE North
-
-**Type:** New feature
-
-Automation account and State Configuration availability in the UAE North region. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-uae-north-region/).
-
-### Support for Automation and State Configuration GA in Germany West Central
-
-**Type:** New feature
-
-Automation account and State Configuration availability in Germany West region. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-germany-west-central-region/).
-
-### DSC support for Oracle 6 and 7
-
-**Type:** New feature
-
-Manage Oracle Linux 6 and 7 machines with Automation State Configuration. See [Supported Linux distros](https://github.com/Azure/azure-linux-extensions/tree/master/DSC#4-supported-linux-distributions) for updates to the documentation to reflect these changes.
-
-### Public Preview for Python3 runbooks in Automation
-
-**Type:** New feature
-
-Azure Automation now supports Python 3 cloud and hybrid runbook execution in public preview in all regions in Azure global cloud. For more information, see the [announcement]((https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
-
-## November 2020
-
-### DSC support for Ubuntu 18.04
-
-**Type:** New feature
-
-See [Supported Linux Distros](https://github.com/Azure/azure-linux-extensions/tree/master/DSC#4-supported-linux-distributions) for updates to the documentation reflecting these changes.
-
-## October 2020
-
-### Support for Automation and State Configuration GA in Switzerland North
-
-**Type:** New feature
-
-Automation account and State Configuration availability in Switzerland North. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-switzerland-north-region/).
-
-### Support for Automation and State Configuration GA in Brazil South East
-
-**Type:** New feature
-
-Automation account and State Configuration availability in Brazil South East. For more information, read [announcement](https://azure.microsoft.com/updates/azure-automation-in-brazil-southeast-region/).
-
-### Update Management availability in South Central US
-
-**Type:** New feature
-
-Azure Automation region mapping updated to support Update Management feature in South Central US region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
-
-## September 2020
-
-### Start/Stop VMs during off-hours runbooks updated to use Azure Az modules
-
-**Type:** New feature
-
-Start/Stop VM runbooks have been updated to use Az modules in place of Azure Resource Manager modules. See [Start/Stop VMs during off-hours](automation-solution-vm-management.md) overview for updates to the documentation to reflect these changes.
-
-## August 2020
-
-### Published the DSC extension to support Azure Arc
-
-**Type:** New feature
-
-Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc-enabled servers DSC VM extension. For more information, read [Arc-enabled servers VM extensions overview](../azure-arc/servers/manage-vm-extensions.md).
-
-### July 2020
-
-### Introduced Public Preview of Private Link support in Automation
-
-**Type:** New feature
-
-Use Azure Private Link to securely connect virtual networks to Azure Automation using private endpoints. For more information, read the [announcement](https://azure.microsoft.com/updates/public-preview-private-link-azure-automation-is-now-available/).
-
-### Hybrid Runbook Worker support for Windows Server 2008 R2
-
-**Type:** New feature
-
-Automation Hybrid Runbook Worker supports the Windows Server 2008 R2 operating system. See [Supported operating systems](automation-windows-hrw-install.md#supported-windows-operating-system) for updates to the documentation to reflect these changes.
-
-### Update Management support for Windows Server 2008 R2
-
-**Type:** New feature
-
-Update Management supports assessing and patching the Windows Server 2008 R2 operating system. See [Supported operating systems](update-management/operating-system-requirements.md) for updates to the documentation to reflect these changes.
-
-### Automation diagnostic logs schema update
-
-**Type:** New feature
-
-Changed the schema of Azure Automation log data in the Log Analytics service. To learn more, see [Forward Azure Automation job data to Azure Monitor logs](automation-manage-send-joblogs-log-analytics.md#filter-job-status-output-converted-into-a-json-object).
-
-### Azure Lighthouse supports Automation Update Management
-
-**Type:** New feature
-
-Azure Lighthouse enables delegated resource management with Update Management for service providers and customers. Read more [here](https://azure.microsoft.com/blog/how-azure-lighthouse-enables-management-at-scale-for-service-providers/).
-
-## June 2020
-
-### Automation and Update Management availability in the US Gov Arizona region
-
-**Type:** New feature
-
-Automation account and Update Management are available in US Gov Arizona. For more information, see [announcement](https://azure.microsoft.com/updates/azure-automation-generally-available-in-usgov-arizona-region/).
-
-### Hybrid Runbook Worker onboarding script updated to use Az modules
-
-**Type:** New feature
-
-The New-OnPremiseHybridWorker runbook has been updated to support Az modules. For more information, see the package in the [PowerShell Gallery](https://www.powershellgallery.com/packages/New-OnPremiseHybridWorker/1.7).
-
-### Update Management availability in China East 2
-
-**Type:** New feature
-
-Azure Automation region mapping updated to support Update Management feature in China East 2 region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
-
-## May 2020
-
-### Updated Automation service DNS records from region-specific to Automation account-specific URLs
-
-**Type:** New feature
-
-Azure Automation DNS records have been updated to support Private Links. For more information, read the [announcement](https://azure.microsoft.com/updates/azure-automation-updateddns-records/).
-
-### Added capability to keep Automation runbooks and DSC scripts encrypted by default
-
-**Type:** New feature
-
-In addition to improve security of assets, runbooks, and DSC scripts are also encrypted to enhance Azure Automation security.
-
-## April 2020
-
-### Retirement of the Automation watcher task
-
-**Type:** Plan for change
-
-Azure Logic Apps is now the recommended and supported way to monitor for events, schedule recurring tasks, and trigger actions. There will be no further investments in Watcher task functionality. To learn more, see [Schedule and run recurring automated tasks with Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
-
-## March 2020
-
-### Support for Impact Level 5 (IL5) compute isolation in Azure commercial and Government cloud
-
-**Type:**
-
-Azure Automation Hybrid Runbook Worker can be used in Azure Government to support Impact Level 5 workloads. To learn more, see our [documentation](automation-hybrid-runbook-worker.md#support-for-impact-level-5-il5).
-
-## February 2020
-
-### Introduced support for Azure virtual network service tags
-
-**Type:** New feature
-
-Automation support of service tags allows or denies the traffic for the Automation service, for a subset of scenarios. To learn more, see the [documentation](automation-hybrid-runbook-worker.md#service-tags).
-
-### Enable TLS 1.2 support for Azure Automation service
-
-**Type:** Plan for change
-
-Azure Automation fully supports TLS 1.2 and all client calls (through webhooks, DSC nodes, and hybrid worker). TLS 1.1 and TLS 1.0 are still supported for backward compatibility with older clients until customers standardize and fully migrate to TLS 1.2.
-
-## January 2020
-
-### Introduced Public Preview of customer-managed keys for Azure Automation
-
-**Type:** New feature
-
-Customers can manage and secure encryption of Azure Automation assets using their own managed keys. For more information, see [Use of customer-managed keys](automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account).
-
-### Retirement of Azure Service Management (ASM) REST APIs for Azure Automation
-
-**Type:** Retire
-
-Azure Service Management (ASM) REST APIs for Azure Automation will be retired and no longer supported after January 30, 2020. To learn more, see the [announcement](https://azure.microsoft.com/updates/azure-automation-service-management-rest-apis-are-being-retired-april-30-2019/).
- ## Next steps If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance.md
You can copy the external IP and port number from here and connect to it using y
## Next steps - [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md) - [Register your instance with Azure and upload metrics and logs about your instance](upload-metrics-and-logs-to-azure-monitor.md)-- [Deploy Azure SQL managed instance using Azure Data Studio](create-sql-managed-instance-azure-data-studio.md)
+- [Deploy Azure SQL Managed Instance using Azure Data Studio](create-sql-managed-instance-azure-data-studio.md)
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/azure-rbac.md
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `serverApplicationId`. ```azurecli
- az ad app create --display-name "<clusterName>Server" --identifier-uris "https://<clusterName>Server" --query appId -o tsv
+ CLUSTERNAME="<clusterName>"
+ SERVER_APP_ID=$(az ad app create --display-name "${CLUSTERNAME}Server" --identifier-uris "https://${CLUSTERNAME}Server" --query appId -o tsv)
+ echo $SERVER_APP_ID
``` 1. Update the application's group membership claims: ```azurecli
- az ad app update --id <serverApplicationId> --set groupMembershipClaims=All
+ az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
``` 1. Create a service principal and get its `password` field value. This value is required later as `serverApplicationSecret` when you're enabling this feature on the cluster. ```azurecli
- az ad sp create --id <serverApplicationId>
- az ad sp credential reset --name <serverApplicationId> --credential-description "ArcSecret" --query password -o tsv
+ az ad sp create --id "${SERVER_APP_ID}"
+ SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --credential-description "ArcSecret" --query password -o tsv)
```
-1. Grant API permissions to the application:
+1. Grant "Sign in and read user profile" API permissions to the application:
```azurecli
- az ad app permission add --id <serverApplicationId> --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
- az ad app permission grant --id <serverApplicationId> --api 00000003-0000-0000-c000-000000000000
+ az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
+ az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000
``` > [!NOTE]
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `clientApplicationId`. ```azurecli
- az ad app create --display-name "<clusterName>Client" --native-app --reply-urls "https://<clusterName>Client" --query appId -o tsv
+ CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTERNAME}Client" --native-app --reply-urls "https://${CLUSTERNAME}Client" --query appId -o tsv)
+ echo $CLIENT_APP_ID
``` 2. Create a service principal for this client application: ```azurecli
- az ad sp create --id <clientApplicationId>
+ az ad sp create --id "${CLIENT_APP_ID}"
``` 3. Get the `oAuthPermissionId` value for the server application: ```azurecli
- az ad app show --id <serverApplicationId> --query "oauth2Permissions[0].id" -o tsv
+ PERMISSION_ID=$(az ad app show --id "${SERVER_APP_ID}" --query "oauth2Permissions[0].id" -o tsv)
+ echo $PERMISSION_ID
``` 4. Grant the required permissions for the client application: ```azurecli
- az ad app permission add --id <clientApplicationId> --api <serverApplicationId> --api-permissions <oAuthPermissionId>=Scope
- az ad app permission grant --id <clientApplicationId> --api <serverApplicationId>
+ az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions "${PERMISSION_ID}=Scope"
+ az ad app permission grant --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}"
``` ## Create a role assignment for the server application
The server application needs the `Microsoft.Authorization/*/read` permissions to
2. Run the following command to create the new custom role: ```azurecli
- az role definition create --role-definition ./accessCheck.json
+ ROLE_ID=$(az role definition create --role-definition ./accessCheck.json --query id -o tsv)
```
-3. From the output of the preceding command, store the value of the `id` field. This field is used in later steps as `roleId`.
-
-4. Create a role assignment on the server application as `assignee` by using the role that you created:
+3. Create a role assignment on the server application as `assignee` by using the role that you created:
```azurecli
- az role assignment create --role <roleId> --assignee <serverApplicationId> --scope /subscriptions/<subscription-id>
+ az role assignment create --role "${ROLE_ID}" --assignee "${SERVER_APP_ID}" --scope /subscriptions/<subscription-id>
``` ## Enable Azure RBAC on the cluster
The server application needs the `Microsoft.Authorization/*/read` permissions to
Enable Azure role-based access control (RBAC) on your Arc enabled Kubernetes cluster by running the following command: ```console
-az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id <serverApplicationId> --app-secret <serverApplicationSecret>
+az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}"
``` > [!NOTE]
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021 #
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Use the Azure CLI extension for `k8s-configuration` to link a connected cluster
"lastModifiedByType": null }, "type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations"
- ```
+ }
+ ```
### Use a public Git repository
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/overview.md
Title: Azure Arc-enabled servers Overview description: Learn how to use Azure Arc-enabled servers to manage servers hosted outside of Azure like an Azure resource. Previously updated : 08/18/2021 Last updated : 08/27/2021
Log data collected and stored in a Log Analytics workspace from the hybrid machi
[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
+To learn more about how Arc-enabled servers can be used to implement Azure monitoring, security, and update services across hybrid and multicloud environments, see the following video.
+
+> [!VIDEO https://www.youtube.com/embed/mJnmXBrU1ao]
+ ## Supported regions For a definitive list of supported regions with Azure Arc-enabled servers, see the [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc) page.
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: How to plan and deploy Azure Arc-enabled servers description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 07/16/2021 Last updated : 08/27/2021
For the deployment to proceed smoothly, your plan should establish a clear under
The purpose of this article is to ensure you are prepared for a successful deployment of Azure Arc-enabled servers across multiple production physical servers or virtual machines in your environment.
+To learn more about our at-scale deployment recommendations, you can also refer to this video.
+
+> [!VIDEO https://www.youtube.com/embed/Cf1jUPOB_vs]
+ ## Prerequisites * Your machines run a [supported operating system](agent-overview.md#supported-operating-systems) for the Connected Machine agent.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-cache-for-redis Cache Best Practices Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-connection.md
+
+ Title: Best practices for connection resilience
+
+description: Learn how to make your Azure Cache for Redis connections resilient.
+++ Last updated : 08/25/2021+++
+# Connection resilience
+
+## Retry commands
+
+Configure your client connections to retry commands with exponential backoff. For more information, see [retry guidelines](/azure/architecture/best-practices/retry-service-specific#azure-cache-for-redis).
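+
+As a concrete sketch, here's one way to retry a command with exponential backoff using the `redis-py` client. The host, key name, attempt count, and backoff base are hypothetical values you'd tune for your workload, not values mandated by the retry guidelines:
+
+```python
+import time
+
+import redis
+
+# Hypothetical connection details; replace with your cache name and access key.
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True)
+
+def get_with_backoff(key, max_attempts=5, base_delay=0.5):
+    """Retry a GET with exponential backoff on connection errors and timeouts."""
+    for attempt in range(max_attempts):
+        try:
+            return client.get(key)
+        except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
+            if attempt == max_attempts - 1:
+                raise  # Out of attempts; surface the error to the caller.
+            time.sleep(base_delay * (2 ** attempt))  # 0.5 s, 1 s, 2 s, 4 s, ...
+```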
+
+## Test resiliency
+
+Test your system's resiliency to connection breaks using a [Reboot](cache-administration.md#reboot) to simulate a patch. For more information on testing your performance, see [Performance testing](cache-best-practices-performance.md).
+
+## Configure appropriate timeouts
+
+Configure your client library to use a *connect timeout* of 10 to 15 seconds and a *command timeout* of 5 seconds. The *connect timeout* is the time your client waits to establish a connection with the Redis server. Most client libraries have another timeout configuration for *command timeouts*, which is the time the client waits for a response from the Redis server.
+
+Some libraries have the *command timeout* set to 5 seconds by default. Consider setting it higher or lower depending on your scenario and the sizes of the values that are stored in your cache.
+
+If the *command timeout* is too small, the connection can look unstable. However, if the *command timeout* is too large, your application might have to wait for a long time to find out whether the command is going to time out.
+
+Configure your client library to use a *connect timeout* of at least 15 seconds, giving the system sufficient time to connect even under higher CPU conditions. A small *connect timeout* value doesn't guarantee that a connection is established in that time frame.
+
+If something goes wrong (high client CPU, high server CPU, and so on), then a short *connect timeout* value causes the connection attempt to fail. This behavior often makes a bad situation worse. Instead of helping, shorter timeouts aggravate the problem by forcing the system to restart the process of trying to reconnect, which can lead to a *connect -> fail -> retry* loop.
+
+We generally recommend that you leave your *connect timeout* at 15 seconds or higher. It's better to let your connection attempt succeed after 15 or 20 seconds than to have it fail quickly only to retry. Such a retry loop can cause your outage to last longer than if you let the system just take longer initially.
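+
+For example, with the `redis-py` client, these two settings map to the `socket_connect_timeout` and `socket_timeout` parameters. This is a sketch with a hypothetical host name:
+
+```python
+import redis
+
+# socket_connect_timeout: time allowed to establish the connection (15 seconds).
+# socket_timeout: time allowed for a command response (5 seconds).
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True,
+                     socket_connect_timeout=15,
+                     socket_timeout=5)
+```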
+
+## Avoid client connection spikes
+
+Avoid creating many connections at the same time when reconnecting after a connection loss. Similar to the way that [short connect timeouts](#configure-appropriate-timeouts) can result in longer outages, starting many reconnect attempts at the same time can also increase server load and extend how long it takes for all clients to reconnect successfully.
+
+If you're reconnecting many client instances, consider staggering the new connections to avoid a steep spike in the number of connected clients.
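+
+One simple way to stagger reconnections, sketched here in Python, is to sleep for a small random jitter before each client instance connects. The five-second jitter window is a hypothetical value:
+
+```python
+import random
+import time
+
+import redis
+
+def connect_with_jitter(max_jitter_seconds=5.0):
+    # Spread reconnect attempts out so many clients don't hit the server at once.
+    time.sleep(random.uniform(0, max_jitter_seconds))
+    return redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                       password="yourAccessKey", ssl=True)
+```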
+
+> [!NOTE]
+> When you use the `StackExchange.Redis` client library, set `abortConnect` to `false` in your connection string. We recommend letting the `ConnectionMultiplexer` handle reconnection. For more information, see [StackExchange.Redis best practices](cache-management-faq.yml#stackexchangeredis-best-practices).
+
+## Avoid leftover connections
+
+Caches have limits on the number of client connections per cache tier. Ensure that when your client application recreates connections, it closes and removes the old ones.
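+
+A minimal sketch of that pattern with the `redis-py` client, assuming your application manages the client handle itself:
+
+```python
+import redis
+
+def recreate_client(old_client):
+    # Release the old connection pool before replacing the client, so stale
+    # connections don't keep counting against the cache's connection limit.
+    if old_client is not None:
+        old_client.close()
+    return redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                       password="yourAccessKey", ssl=True)
+```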
+
+## Advance maintenance notification
+
+Use notifications to learn of upcoming maintenance. For more information, see [Can I be notified in advance of a planned maintenance](cache-failover.md#can-i-be-notified-in-advance-of-a-planned-maintenance).
+
+## Schedule maintenance window
+
+Adjust your cache settings to accommodate maintenance. For more information about creating a maintenance window to reduce any negative effects to your cache, see [Schedule updates](cache-administration.md#schedule-updates).
+
+## More design patterns for resilience
+
+Apply design patterns for resiliency. For more information, see [recommended design patterns](cache-failover.md#how-do-i-make-my-application-resilient).
+
+## Idle timeout
+
+Azure Cache for Redis currently has a 10-minute idle timeout for connections, so the idle timeout setting in your client application should be less than 10 minutes. Most common client libraries have a configuration setting that allows client libraries to send Redis `PING` commands to a Redis server automatically and periodically. However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
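+
+For example, `redis-py` can send periodic `PING` health checks on otherwise idle connections. Setting the interval well below the 10-minute idle timeout keeps the connection alive; the 60-second interval here is an assumption, not a required value:
+
+```python
+import redis
+
+# health_check_interval issues a PING when the connection has been idle
+# longer than the given number of seconds (well under the 10-minute limit).
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True,
+                     health_check_interval=60)
+```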
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-development.md
+
+ Title: Best practices for development
+
+description: Learn how to develop code for Azure Cache for Redis.
+++ Last updated : 08/25/2021+++
+# Development
+
+## Connection resilience and server load
+
+When developing client applications, be sure to consider the relevant best practices for [connection resilience](cache-best-practices-connection.md) and [managing server load](cache-best-practices-server-load.md).
+
+## Consider more keys and smaller values
+
+Redis works best with smaller values. Consider dividing bigger chunks of data into smaller chunks to spread the data over multiple keys. For more information on ideal value size, see this [discussion](https://stackoverflow.com/questions/55517224/what-is-the-ideal-value-size-range-for-redis-is-100kb-too-large/).
+
+That discussion lists several considerations to weigh carefully. For an example of a problem that large values can cause, see [Large request or response size](cache-troubleshoot-client.md#large-request-or-response-size).
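+
+As an illustration of the idea, this sketch splits a large payload across several keys and reassembles it on read. The chunk size and key-naming scheme are hypothetical choices:
+
+```python
+import redis
+
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True)
+
+CHUNK_SIZE = 100 * 1024  # Keep each stored value at roughly 100 KB or less.
+
+def set_chunked(key: str, data: bytes) -> None:
+    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
+    client.set(f"{key}:count", len(chunks))
+    for i, chunk in enumerate(chunks):
+        client.set(f"{key}:{i}", chunk)
+
+def get_chunked(key: str) -> bytes:
+    count = int(client.get(f"{key}:count"))
+    return b"".join(client.get(f"{key}:{i}") for i in range(count))
+```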
+
+## Key distribution
+
+If you're planning to use Redis clustering, first read [Redis Clustering Best Practices with Keys](https://redislabs.com/blog/redis-clustering-best-practices-with-keys/).
+
+## Use pipelining
+
+Try to choose a Redis client that supports [Redis pipelining](https://redis.io/topics/pipelining). Pipelining helps make efficient use of the network and get the best throughput possible.
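+
+For instance, `redis-py` exposes pipelining through the `pipeline()` method. This sketch batches several commands into one round trip; the key names are hypothetical:
+
+```python
+import redis
+
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True)
+
+# transaction=False sends a plain pipeline (no MULTI/EXEC wrapper).
+pipe = client.pipeline(transaction=False)
+pipe.set("user:1:name", "Ada")
+pipe.set("user:1:role", "admin")
+pipe.get("user:1:name")
+results = pipe.execute()  # One network round trip for all queued commands.
+```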
+
+## Avoid expensive operations
+
+Some Redis operations, like the [KEYS](https://redis.io/commands/keys) command, are expensive and should be avoided. For some considerations around long-running commands, see [long-running commands](cache-troubleshoot-server.md#long-running-commands).
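+
+For example, instead of `KEYS`, you can walk the keyspace incrementally with `SCAN`, which `redis-py` wraps as `scan_iter`. The match pattern and batch size below are hypothetical:
+
+```python
+import redis
+
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True)
+
+# SCAN fetches keys in small batches instead of blocking the single-threaded
+# server the way one KEYS call over a large keyspace would.
+for key in client.scan_iter(match="user:*", count=100):
+    print(key)
+```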
+
+## Choose an appropriate tier
+Use Standard or Premium tier for production systems. Don't use the Basic tier in production. The Basic tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches are only meant for simple dev/test scenarios because:
+
+- They share a CPU core
+- They use little memory
+- They're prone to *noisy neighbor* issues
+
+We recommend performance testing to choose the right tier and validate connection settings. For more information, see [Performance testing](cache-best-practices-performance.md).
+
+## Client in same region as cache
+
+Locate your cache instance and your application in the same region. Connecting to a cache in a different region can significantly increase latency and reduce reliability.
+
+While you can connect from outside of Azure, it isn't recommended, *especially when you're using Redis as a cache*. If you're using Redis as just a key/value store, latency might not be the primary concern.
+
+## Use TLS encryption
+
+Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible.
+
+If your client library or tool doesn't support TLS, then enabling unencrypted connections is possible through the [Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/redis/update). In cases where encrypted connections aren't possible, we recommend placing your cache and client application into a virtual network. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
+
+## Client library-specific guidance
+
+* [StackExchange.Redis (.NET)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-stackexchange-redis-md)
+* [Java - Which client should I use?](https://gist.github.com/warrenzhu25/1beb02a09b6afd41dff2c27c53918ce7#file-azure-redis-java-best-practices-md)
+* [Lettuce (Java)](https://github.com/Azure/AzureCacheForRedis/blob/main/Lettuce%20Best%20Practices.md)
+* [Jedis (Java)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-java-jedis-md)
+* [Node.js](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-node-js-md)
+* [PHP](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-php-md)
+* [HiRedisCluster](https://github.com/Azure/AzureCacheForRedis/blob/main/HiRedisCluster%20Best%20Practices.md)
+* [ASP.NET Session State Provider](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-session-state-provider-md)
azure-cache-for-redis Cache Best Practices Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-kubernetes.md
+
+ Title: Best practices for hosting a Kubernetes client application
+
+description: Learn how to host a Kubernetes client application that uses Azure Cache for Redis.
+++ Last updated : 08/25/2021+++
+# Kubernetes-hosted client application
+
+## Multiple pods
+
+When you have multiple pods connecting to a Redis server, ensure that the new connections from the pods are created in a staggered manner. If multiple pods start up in a short time without staggering, it causes a sudden spike in the number of client connections created. The high number of connections leads to high load on the Redis server and might cause timeouts.
+
+Avoid the same scenario when shutting down multiple pods at the same time. Failing to stagger shutdowns might cause a steep dip in the number of connections, which also creates CPU pressure.
+
+## Sufficient resources
+
+Ensure that the Kubernetes node that hosts the pod connecting to Redis server has sufficient memory, CPU, and network bandwidth.
+
+## Noisy neighbor problem
+
+Beware of the *noisy neighbor* problem. A pod running the client application can be affected by other pods running on the same node, which can throttle Redis connections or I/O operations.
azure-cache-for-redis Cache Best Practices Memory Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-memory-management.md
+
+ Title: Best practices for memory management
+
+description: Learn how to manage your Azure Cache for Redis memory effectively.
+++ Last updated : 08/25/2021++
+# Memory management
+
+## Eviction policy
+
+Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Cache for Redis is `volatile-lru`, which means that only keys that have a TTL value set are eligible for eviction. If no keys have a TTL value, the system won't evict any keys. If you want the system to allow any key to be evicted under memory pressure, consider the `allkeys-lru` policy.
+
+## Keys expiration
+
+Set an expiration value on your keys. An expiration removes keys proactively instead of waiting until there's memory pressure. When eviction happens because of memory pressure, it can cause more load on your server. For more information, see the documentation for the [EXPIRE](https://redis.io/commands/expire) and [EXPIREAT](https://redis.io/commands/expireat) commands.
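+
+A quick illustration with `redis-py`; the key name and the one-hour and 30-minute TTLs are hypothetical choices:
+
+```python
+import redis
+
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True)
+
+client.set("session:42", "payload", ex=3600)  # SET with a one-hour expiration.
+client.expire("session:42", 1800)             # Or adjust the TTL on an existing key.
+```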
+
+## Minimize memory fragmentation
+
+Large values can leave memory fragmented on eviction and might lead to high memory usage and server load.
+
+## Monitor memory usage
+
+Add monitoring on memory usage to ensure that you don't run out of memory and have the chance to scale your cache before seeing issues.
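+
+Azure Monitor metrics are the primary tool here, but as a supplementary sketch you can also spot-check memory at the Redis level with the `INFO` command, shown with `redis-py`:
+
+```python
+import redis
+
+client = redis.Redis(host="yourcache.redis.cache.windows.net", port=6380,
+                     password="yourAccessKey", ssl=True)
+
+mem = client.info("memory")  # The INFO MEMORY section, returned as a dict.
+print(mem["used_memory_human"], mem["used_memory_rss_human"])
+```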
+
+## Configure your maxmemory-reserved setting
+
+Configure your [maxmemory-reserved setting](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) to improve system responsiveness:
+
+* A sufficient reservation setting is especially important for write-heavy workloads or if you're storing values of 100 KB or more in your cache. Start with 10% of the size of your cache and increase this percentage if you have write-heavy loads.
+
+* The `maxmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data.
+
+* The `maxfragmentationmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data.
+
+* One thing to consider when choosing a new memory reservation value (`maxmemory-reserved` or `maxfragmentationmemory-reserved`) is how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data and then change the reservation value to 8 GB, the max available memory for the system will drop to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system must evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-performance.md
+
+ Title: Best practices for performance testing
+
+description: Learn how to test the performance of Azure Cache for Redis.
+++ Last updated : 08/25/2021+++
+# Performance testing
+
+1. Start by using `redis-benchmark.exe` to check the general throughput and latency characteristics of your cache before writing your own performance tests. For more information, see [Redis-Benchmark](#redis-benchmark-utility).
+
+1. The client VM used for testing should be *in the same region* as your Redis cache instance.
+
+1. Make sure the client VM you use has *at least as much compute and bandwidth* as the cache being tested.
+
+1. It's important that you don't test the performance of your cache only under steady state conditions. *Test under failover conditions too*, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions shows you the throughput and latency of your application during a failover. Failover can happen during updates or during an unplanned event. Ideally, you don't want CPU/Server Load to peak above, say, 80% even during a failover, because that can affect performance.
+
+1. Consider using Premium tier Azure Cache for Redis instances. These cache sizes have better network latency and throughput because they're running on better hardware for both CPU and network.
+
+ > [!NOTE]
+ > Our observed performance results are [published here](./cache-planning-faq.yml#azure-cache-for-redis-performance) for your reference. Also, be aware that SSL/TLS adds some overhead, so you may get different latencies and/or throughput if you're using transport encryption.
+
+## Redis-benchmark utility
+
+**Redis-benchmark** documentation can be [found here](https://redis.io/topics/benchmarks).
+
+The `redis-benchmark` utility doesn't support TLS. You'll have to [enable the non-TLS port through the portal](cache-configure.md#access-ports) before you run the test. A Windows-compatible version of `redis-benchmark.exe` is available [here](https://github.com/MSOpenTech/redis/releases).
+
+## Redis-benchmark examples
+
+**Pre-test setup**:
+Prepare the cache instance with data required for the latency and throughput testing:
+
+```console
+redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t SET -n 10 -d 1024
+```
+
+**To test latency**:
+Test GET requests using a 1k payload:
+
+```console
+redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -d 1024 -P 50 -c 4
+```
+
+**To test throughput:**
+Pipelined GET requests with 1k payload:
+
+```console
+redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50
+```
azure-cache-for-redis Cache Best Practices Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-scale.md
+
+ Title: Best practices for scaling your Azure Cache for Redis
+
+description: Learn how to scale your Azure Cache for Redis.
+++ Last updated : 08/25/2021++
+
+# Scaling
+
+## Scaling under load
+
+While scaling a cache under load, configure your maxmemory-reserved setting to improve system responsiveness. For more information, see [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting).
+
+## Scaling clusters
+
+Try reducing data as much as you can in the cache before scaling your clustered cache in or out. Reducing data ensures smaller amounts of data have to be moved, which reduces the time required for the scale operation. For more information on when to scale, see [When to scale](cache-how-to-scale.md#when-to-scale).
+
+## Scale before load is too high
+
+Start scaling before the server load or memory usage gets too high. If it's too high, that means Redis server is busy. The busy Redis server doesn't have enough resources to scale and redistribute data.
+
+## Cache sizes
+
+If you are using TLS and you have a high number of connections, consider scaling out so that you can distribute the load over more cores. Some cache sizes are hosted on VMs with four or more cores.
+
+Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. For more information, see [details around VM sizes and cores](./cache-planning-faq.yml#azure-cache-for-redis-performance).
azure-cache-for-redis Cache Best Practices Server Load https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-server-load.md
+
+ Title: Best practices for Using and Monitoring the Server Load for Azure Cache for Redis
+
+description: Learn how to use and monitor your server load for Azure Cache for Redis.
+++ Last updated : 08/25/2021+++
+# Manage Server Load for Azure Cache for Redis
+
+## Value sizes
+
+The design of your client application determines whether you should store many small values or a smaller number of larger values. From a Redis server perspective, smaller values give better performance. We recommend keeping value size smaller than 100 KB.
+
+If your design requires you to store larger values in the Azure Cache for Redis, the server load will be higher. In this case, you might need to use a higher cache tier to ensure CPU usage doesn't limit throughput.
+
+Even if the cache has sufficient CPU capacity, larger values do increase latencies, so follow the guidance in [Configure appropriate timeouts](cache-best-practices-connection.md#configure-appropriate-timeouts).
+
+Larger values also increase the chances of memory fragmentation, so be sure to follow the guidance in [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting).
+
+## Avoid client connection spikes
+
+Creating and closing connections is an expensive operation for Redis server. If your client application creates or closes too many connections in a small amount of time, it could burden the Redis server.
+
+If you're instantiating many client instances to connect to Redis at once, consider staggering the new connection creations to avoid a steep spike in the number of connected clients.
+
+## Memory pressure
+
+High memory usage on the server makes it more likely that the system will need to page data to disk, resulting in page faults that can slow down the system significantly.
+
+## Avoid long running commands
+
+Redis server is a single-threaded system. Long running commands can cause latency or timeouts on the client side because the server can't respond to any other requests while it's busy working on a long running command. For more information, see [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md).
+
+## Monitor Server Load
+
+Add monitoring on server load to ensure you get notified when high server load occurs. Monitoring can help you understand your application constraints so you can work proactively to mitigate issues. We recommend trying to keep server load under 80% to avoid negative performance effects.
+
+## Plan for server maintenance
+
+Ensure you have enough server capacity to handle your peak load while your cache servers are undergoing maintenance. Test your system by rebooting nodes while under peak load. For more information on how to simulate deployment of a patch, see [reboot](cache-administration.md#reboot).
+
+## Next steps
+
+- [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md)
+- [Connection resilience](cache-best-practices-connection.md)
+ - [Configure appropriate timeouts](cache-best-practices-connection.md#configure-appropriate-timeouts).
+- [Memory management](cache-best-practices-memory-management.md)
+ - [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting)
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
- Title: Best practices for Azure Cache for Redis
-description: Learn how to use your Azure Cache for Redis effectively by following these best practices.
-
-reviewer: shpathak
-- Previously updated : 01/06/2020---
-# Best practices for Azure Cache for Redis
-
-By following these best practices, you can help maximize the performance and cost-effective use of your Azure Cache for Redis instance.
-
-## Configuration and concepts
-
-* **Use Standard or Premium tier for production systems.** The Basic tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches are meant for simple dev/test scenarios since they have a shared CPU core, little memory, and are prone to "noisy neighbor" issues.
-
-* **Remember that Redis is an in-memory data store.** [This article](cache-troubleshoot-data-loss.md) outlines some scenarios where data loss can occur.
-
-* **Develop your system such that it can handle connection blips** [because of patching and failover](cache-failover.md).
-
-* **Configure your [maxmemory-reserved setting](cache-configure.md#maxmemory-policy-and-maxmemory-reserved) to improve system responsiveness** under memory pressure conditions. A sufficient reservation setting is especially important for write-heavy workloads or if you're storing values of 100 KB or more in Redis. Start with 10% of the size of your cache and increase this percentage if you have write-heavy loads.
-
-* **Redis works best with smaller values**, so consider chopping up bigger data into multiple keys. In [this Redis discussion](https://stackoverflow.com/questions/55517224/what-is-the-ideal-value-size-range-for-redis-is-100kb-too-large/), some considerations are listed that you should consider carefully. Read [this article](cache-troubleshoot-client.md#large-request-or-response-size) for an example problem that can be caused by large values.
-
-* **Locate your cache instance and your application in the same region.** Connecting to a cache in a different region can significantly increase latency and reduce reliability. While you can connect from outside of Azure, it's not recommended *especially when using Redis as a cache*. If you're using Redis as just a key/value store, latency may not be the primary concern.
-
-* **Reuse connections.** Creating new connections is expensive and increases latency, so reuse connections as much as possible. If you choose to create new connections, make sure to close the old connections before you release them (even in managed memory languages like .NET or Java).
-
-* **Use pipelining.** Try to choose a Redis client that supports [Redis pipelining](https://redis.io/topics/pipelining). Pipelining helps make efficient use of the network and get the best throughput possible.
-
-* **Configure your client library to use a *connect timeout* of at least 15 seconds**, giving the system time to connect even under higher CPU conditions. A small connection timeout value doesn't guarantee that the connection is established in that time frame. If something goes wrong (high client CPU, high server CPU, and so on), then a short connection timeout value will cause the connection attempt to fail. This behavior often makes a bad situation worse. Instead of helping, shorter timeouts aggravate the problem by forcing the system to restart the process of trying to reconnect, which can lead to a *connect -> fail -> retry* loop. We generally recommend that you leave your connection Timeout at 15 seconds or higher. It's better to let your connection attempt succeed after 15 or 20 seconds than to have it fail quickly only to retry. Such a retry loop can cause your outage to last longer than if you let the system just take longer initially.
- > [!NOTE]
- > This guidance is specific to the *connection attempt* and not related to the time you're willing to wait for an *operation* like GET or SET to complete.
-
-* **Avoid expensive operations** - Some Redis operations, like the [KEYS](https://redis.io/commands/keys) command, are *very* expensive and should be avoided. For more information, see some considerations around [long-running commands](cache-troubleshoot-server.md#long-running-commands)
-
-* **Use TLS encryption** - Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible. If your client library or tool doesn't support TLS, then enabling unencrypted connections can be done [through the Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/redis/update). In such cases where encrypted connections aren't possible, placing your cache and client application into a virtual network would be recommended. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
-
-* **Idle Timeout** - Azure Cache for Redis currently has 10-minute idle timeout for connections, so your setting should be to less than 10 minutes. Most common client libraries have a configuration setting that allows client libraries to send Redis `PING` commands to a Redis server automatically and periodically. However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
-
-<!-- Most common client libraries have keep-alive configuration that pings Azure Redis automatically. However, in clients that don't have a keep-alive setting, customer applications are responsible for keeping the connection alive.
- -->
-## Memory management
-
-There are several things related to memory usage within your Redis server instance that you may want to consider. Here are a few:
-
-* **Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application.** The default policy for Azure Redis is *volatile-lru*, which means that only keys that have a TTL value set will be eligible for eviction. If no keys have a TTL value, then the system won't evict any keys. If you want the system to allow any key to be evicted if under memory pressure, then you may want to consider the *allkeys-lru* policy.
-
-* **Set an expiration value on your keys.** An expiration will remove keys proactively instead of waiting until there's memory pressure. When eviction does kick in because of memory pressure, it can cause more load on your server. For more information, see the documentation for the [EXPIRE](https://redis.io/commands/expire) and [EXPIREAT](https://redis.io/commands/expireat) commands.
-
-## Client library specific guidance
-
-* [StackExchange.Redis (.NET)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-stackexchange-redis-md)
-* [Java - Which client should I use?](https://gist.github.com/warrenzhu25/1beb02a09b6afd41dff2c27c53918ce7#file-azure-redis-java-best-practices-md)
-* [Lettuce (Java)](https://github.com/Azure/AzureCacheForRedis/blob/main/Lettuce%20Best%20Practices.md)
-* [Jedis (Java)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-java-jedis-md)
-* [Node.js](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-node-js-md)
-* [PHP](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-php-md)
-* [HiRedisCluster](https://github.com/Azure/AzureCacheForRedis/blob/main/HiRedisCluster%20Best%20Practices.md)
-* [ASP.NET Session State Provider](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-session-state-provider-md)
-
-## When is it safe to retry?
-
-Unfortunately, there's no easy answer. Each application needs to decide what operations can be retried and which can't. Each operation has different requirements and inter-key dependencies. Here are some things you might consider:
-
-* You can get client-side errors even though Redis successfully ran the command you asked it to run. For example:
- * Timeouts are a client-side concept. If the operation reached the server, the server will run the command even if the client gives up waiting.
- * When an error occurs on the socket connection, it's not possible to know if the operation actually ran on the server. For example, the connection error can happen after the server processed the request but before the client receives the response.
-* How does my application react if I accidentally run the same operation twice? For instance, what if I increment an integer twice instead of once? Is my application writing to the same key from multiple places? What if my retry logic overwrites a value set by some other part of my app?
-
-If you would like to test how your code works under error conditions, consider using the [Reboot feature](cache-administration.md#reboot). Rebooting allows you to see how connection blips affect your application.
-
-## Performance testing
-
-* **Start by using `redis-benchmark.exe`** to get a feel for possible throughput/latency before writing your own perf tests. Redis-benchmark documentation can be [found here](https://redis.io/topics/benchmarks). The `redis-benchmark.exe` doesn't support TLS. You'll have to [enable the Non-TLS port through the Portal](cache-configure.md#access-ports) before you run the test. A windows compatible version of redis-benchmark.exe can be found [here](https://github.com/MSOpenTech/redis/releases).
-* The client VM used for testing should be **in the same region** as your Redis cache instance.
-* **We recommend using Dv2 VM Series** for your client as they have better hardware and will give the best results.
-* Make sure the client VM you use has **at least as much compute and bandwidth** as the cache being tested.
-* **Test under failover conditions** on your cache. It's important to ensure that you don't test the performance of your cache only under steady state conditions. Test under failover conditions, too, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions allows you to see how your application behaves in terms of throughput and latency during failover conditions. Failover can happen during updates and during an unplanned event. Ideally you don't want to see CPU/Server Load peak to more than say 80% even during a failover as that can affect performance.
-* **Some cache sizes** are hosted on VMs with four or more cores. Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. [See here for details around VM sizes and cores](./cache-planning-faq.yml#azure-cache-for-redis-performance)
-* **Enable VRSS** on the client machine if you are on Windows. [See here for details](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)). Example PowerShell script:
- >PowerShell -ExecutionPolicy Unrestricted Enable-NetAdapterRSS -Name ( Get-NetAdapter).Name
-
-* **Consider using Premium tier Redis instances**. These cache sizes will have better network latency and throughput because they're running on better hardware for both CPU and Network.
-
- > [!NOTE]
- > Our observed performance results are [published here](./cache-planning-faq.yml#azure-cache-for-redis-performance) for your reference. Also, be aware that SSL/TLS adds some overhead, so you may get different latencies and/or throughput if you're using transport encryption.
-
-### Redis-Benchmark examples
-
-**Pre-test setup**:
-Prepare the cache instance with data required for the latency and throughput testing commands listed below.
-> redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t SET -n 10 -d 1024
-
-**To test latency**:
-Test GET requests using a 1k payload.
-> redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -d 1024 -P 50 -c 4
-
-**To test throughput:**
-Pipelined GET requests with 1k payload.
-> redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
The value for this setting indicates a custom package index URL for Python apps.
To learn more, see [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
-## PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES
+## PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES (Preview)
-The configuration is specific to Python function apps. It defines the prioritization of module loading order. When your Python function apps face issues related to module collision (e.g. when you're using protobuf, tensorflow, or grpcio in your project), configuring this app setting to `1` should resolve your issue. By default, this value is set to `0`.
+The configuration is specific to Python function apps. It defines the prioritization of module loading order. When your Python function apps face issues related to module collision (e.g. when you're using protobuf, tensorflow, or grpcio in your project), configuring this app setting to `1` should resolve your issue. By default, this value is set to `0`. This flag is currently in Preview.
|Key|Value|Description| ||--|--|
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-manually-run-non-http.md
This article demonstrates how to manually run a non HTTP-triggered function via
In some contexts, you may need to run "on-demand" an Azure Function that is indirectly triggered. Examples of indirect triggers include [functions on a schedule](./functions-create-scheduled-function.md) or functions that run as the result of [another resource's action](./functions-create-storage-blob-triggered-function.md).
-[Postman](https://www.getpostman.com/) is used in the following example, but you may use [cURL](https://curl.haxx.se/), [Fiddler](https://www.telerik.com/fiddler) or any other like tool to send HTTP requests.
+[Postman](https://www.getpostman.com/) is used in the following example, but you can use [cURL](https://curl.haxx.se/), [Fiddler](https://www.telerik.com/fiddler) or any other like tool to send HTTP requests.
## Define the request location
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
def main(req):
## Environment variables
-In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. You can access these settings by declaring `import os` and then using, `setting = os.environ["setting-name"]`.
+In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
-The following example gets the [application setting](functions-how-to-use-azure-function-app-settings.md#settings), with the key named `myAppSetting`:
+| Method | Description |
+| --- | --- |
+| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising a `KeyError` when unsuccessful. |
+| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning `None` when unsuccessful. |
+
+Both of these ways require you to declare `import os`.
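+
+For instance, `os.getenv` also accepts a default value to return when the setting is missing; the setting name and fallback below are hypothetical:
+
+```python
+import os
+
+# Returns the value of myAppSetting, or "default-value" when it isn't defined.
+setting = os.getenv("myAppSetting", "default-value")
+```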
+
+The following example uses `os.environ["myAppSetting"]` to get the [application setting](functions-how-to-use-azure-function-app-settings.md#settings), with the key named `myAppSetting`:
```python import logging
The Functions Python worker requires a specific set of libraries. You can also u
> If your function app's requirements.txt contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of worker in requirements.txt may cause unexpected issues. > [!NOTE]
-> If your package contains certain libraries that may collide with worker's dependencies (e.g. protobuf, tensorflow, grpcio), please configure `PYTHON_ISOLATE_WORKER_DEPENDENCIES` to `1` in app settings to prevent your application from referring worker's dependencies.
+> If your package contains certain libraries that may collide with worker's dependencies (e.g. protobuf, tensorflow, grpcio), please configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
### Azure Functions Python library
azure-functions Manage Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/manage-connections.md
Title: Manage connections in Azure Functions
description: Learn how to avoid performance problems in Azure Functions by using static connection clients. Previously updated : 02/25/2018 Last updated : 08/23/2021 # Customer intent: As a developer, I want to know how to write my Azure Functions code so that I efficiently use connections and avoid potential bottlenecks. # Manage connections in Azure Functions
-Functions in a function app share resources. Among those shared resources are connections: HTTP connections, database connections, and connections to services such as Azure Storage. When many functions are running concurrently, it's possible to run out of available connections. This article explains how to code your functions to avoid using more connections than they need.
+Functions in a function app share resources. Among those shared resources are connections: HTTP connections, database connections, and connections to services such as Azure Storage. When many functions are running concurrently in a Consumption plan, it's possible to run out of available connections. This article explains how to code your functions to avoid using more connections than they need.
+
+> [!NOTE]
+> Connection limits described in this article apply only when running in a [Consumption plan](consumption-plan.md). However, the techniques described here may be beneficial when running on any plan.
## Connection limit
-The number of available connections is limited partly because a function app runs in a [sandbox environment](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox). One of the restrictions that the sandbox imposes on your code is a limit on the number of outbound connections, which is currently 600 active (1,200 total) connections per instance. When you reach this limit, the functions runtime writes the following message to the logs: `Host thresholds exceeded: Connections`. For more information, see the [Functions service limits](functions-scale.md#service-limits).
+The number of available connections in a Consumption plan is limited partly because a function app in this plan runs in a [sandbox environment](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox). One of the restrictions that the sandbox imposes on your code is a limit on the number of outbound connections, which is currently 600 active (1,200 total) connections per instance. When you reach this limit, the functions runtime writes the following message to the logs: `Host thresholds exceeded: Connections`. For more information, see the [Functions service limits](functions-scale.md#service-limits).
This limit is per instance. When the [scale controller adds function app instances](event-driven-scaling.md) to handle more requests, each instance has an independent connection limit. That means there's no global connection limit, and you can have much more than 600 active connections across all active instances.
Here are some guidelines to follow when you're using a service-specific client i
This section demonstrates best practices for creating and using clients from your function code.
-### HttpClient example (C#)
+### HTTP requests
+# [C#](#tab/csharp)
Here's an example of C# function code that creates a static [HttpClient](/dotnet/api/system.net.http.httpclient?view=netcore-3.1&preserve-view=true) instance: ```cs
public static async Task Run(string input)
A common question about [HttpClient](/dotnet/api/system.net.http.httpclient?view=netcore-3.1&preserve-view=true) in .NET is "Should I dispose of my client?" In general, you dispose of objects that implement `IDisposable` when you're done using them. But you don't dispose of a static client because you aren't done using it when the function ends. You want the static client to live for the duration of your application.
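Here's a minimal sketch of that static-client pattern, assuming C# script syntax; the endpoint URL is a placeholder:

```cs
using System.Net.Http;
using System.Threading.Tasks;

// Created once per instance and shared across all invocations
private static HttpClient httpClient = new HttpClient();

public static async Task<string> Run(string input)
{
    // Reuses pooled connections instead of opening a new one per invocation
    var response = await httpClient.GetAsync("https://example.com");
    return await response.Content.ReadAsStringAsync();
}
```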
-### HTTP agent examples (JavaScript)
+# [JavaScript](#tab/javascript)
Because it provides better connection management options, you should use the native [`http.agent`](https://nodejs.org/dist/latest-v6.x/docs/api/http.html#http_class_http_agent) class instead of non-native methods, such as the `node-fetch` module. Connection parameters are configured through options on the `http.agent` class. For detailed options available with the HTTP agent, see [new Agent(\[options\])](https://nodejs.org/dist/latest-v6.x/docs/api/http.html#http_new_agent_options).
const http = require('http');
const httpAgent = new http.Agent({ keepAlive: true }); // reuse a keep-alive agent so connections are pooled
const options = { agent: httpAgent };
http.request(options, onResponseCallback); ```
-### DocumentClient code example (C#)
++
+### Azure Cosmos DB clients
+
+# [C#](#tab/csharp)
[DocumentClient](/dotnet/api/microsoft.azure.documents.client.documentclient) connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends that you [use a singleton Azure Cosmos DB client for the lifetime of your application](../cosmos-db/performance-tips.md#sdk-usage). The following example shows one pattern for doing that in a function:
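A minimal sketch of that singleton pattern, assuming C# script syntax and app settings for the endpoint and key (the setting names here are hypothetical):

```cs
#r "Microsoft.Azure.DocumentDB.Core"

using System;
using Microsoft.Azure.Documents.Client;

// Initialized lazily once per instance and reused across invocations
private static Lazy<DocumentClient> lazyClient = new Lazy<DocumentClient>(() =>
    new DocumentClient(
        new Uri(Environment.GetEnvironmentVariable("CosmosDBEndpoint")), // hypothetical app setting
        Environment.GetEnvironmentVariable("CosmosDBKey")));             // hypothetical app setting

private static DocumentClient documentClient => lazyClient.Value;
```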
Also, create a file named "function.proj" for your trigger and add the below con
</Project> ```
-### CosmosClient code example (JavaScript)
+
+# [JavaScript](#tab/javascript)
+ [CosmosClient](/javascript/api/@azure/cosmos/cosmosclient) connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends that you [use a singleton Azure Cosmos DB client for the lifetime of your application](../cosmos-db/performance-tips.md#sdk-usage). The following example shows one pattern for doing that in a function: ```javascript
module.exports = async function (context) {
} ``` ++ ## SqlClient connections Your function code can use the .NET Framework Data Provider for SQL Server ([SqlClient](/dotnet/api/system.data.sqlclient)) to make connections to a SQL relational database. This is also the underlying provider for data frameworks that rely on ADO.NET, such as [Entity Framework](/ef/ef6/). Unlike [HttpClient](/dotnet/api/system.net.http.httpclient) and [DocumentClient](/dotnet/api/microsoft.azure.documents.client.documentclient) connections, ADO.NET implements connection pooling by default. But because you can still run out of connections, you should optimize connections to the database. For more information, see [SQL Server Connection Pooling (ADO.NET)](/dotnet/framework/data/adonet/sql-server-connection-pooling).
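As a minimal sketch of that guidance, create the `SqlConnection` inside the function and dispose it promptly so the underlying connection returns to the ADO.NET pool (the connection-string setting name here is hypothetical):

```cs
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static async Task Run(string input)
{
    var connectionString = Environment.GetEnvironmentVariable("SqlConnectionString"); // hypothetical app setting
    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync();
        // Run commands here; disposing the connection returns it to the pool rather than closing it.
    }
}
```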
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
ms.devlang: na
na Previously updated : 08/24/2021 Last updated : 08/26/2021 # Compare Azure Government and global Azure
The following Language Understanding **features are not currently available** in
### [Speech service](../cognitive-services/speech-service/overview.md)
-The following Speech service **features are not currently available** in Azure Government:
--- Custom Voice-
-See details of supported locales by features in [Speech service supported regions](../cognitive-services/speech-service/regions.md). For more information including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/Speech-Service/sovereign-clouds.md).
+For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/Speech-Service/sovereign-clouds.md).
### [Translator](../cognitive-services/translator/translator-info-overview.md)
The following Translator **features are not currently available** in Azure Gover
This section outlines variations and considerations when using Analytics services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
-### [Azure Data Factory](../data-factory/index.yml)
-
-The following Data Factory **features are not currently available** in Azure Government:
--- Mapping data flows- ### [HDInsight](../hdinsight/hadoop/apache-hadoop-introduction.md) The following HDInsight **features are not currently available** in Azure Government:
The following Azure DevTest Labs **features are not currently available** in Azu
- Auto shutdown feature for Azure Compute VMs; however, setting auto shutdown for [Labs](https://azure.microsoft.com/updates/azure-devtest-labs-auto-shutdown-notification/) and [Lab Virtual Machines](https://azure.microsoft.com/updates/azure-devtest-labs-set-auto-shutdown-for-a-single-lab-vm/) is available.
+## Identity
+
+This section outlines variations and considerations when using Identity services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=information-protection,active-directory-ds,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+
+### [Azure Active Directory Premium P1 and P2](../active-directory/index.yml)
+
+The following features have known limitations in Azure Government:
+
+- Limitations with B2B Collaboration in supported Azure US Government tenants:
+ - B2B Collaboration is available in most Azure US Government tenants created after June 2019. Over time, more tenants will get access to this functionality. See [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../active-directory/external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant)
  - B2B collaboration is supported between tenants that are both within the Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft accounts, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user will be unable to redeem the invitation.
+ - B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
+ - Microsoft 365 Groups are not supported for B2B users and can't be enabled.
+ - Some SQL tools such as SQL Server Management Studio (SSMS) require you to set the appropriate cloud parameter. In the tool's Azure service setup options, set the cloud parameter to Azure Government.
+
+- Limitations with multifactor authentication:
+ - Hardware OATH tokens are not available in Azure Government.
  - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based on the user's current IP address.
+
+- Limitations with Azure AD join:
  - Enterprise state roaming for Windows 10 devices is not available.
++ ## Internet of Things This section outlines variations and considerations when using Internet of Things services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=api-management,cosmos-db,notification-hubs,logic-apps,stream-analytics,machine-learning-studio,machine-learning-service,event-grid,functions,azure-rtos,azure-maps,iot-central,iot-hub&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
Traffic Manager health checks can originate from certain IP addresses for Azure
This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
-### [Azure Active Directory Premium P1 and P2](../active-directory/index.yml)
-
-The following features have known limitations in Azure Government:
--- Limitations with B2B Collaboration in supported Azure US Government tenants:
- - B2B Collaboration is available in most Azure US Government tenants created after June 2019. Over time, more tenants will get access to this functionality. See [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../active-directory/external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant)
- - B2B collaboration is supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user will be unable to redeem the invitation.
- - B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
- - Microsoft 365 Groups are not supported for B2B users and can't be enabled.
- - Some SQL tools such as SQL Server Management Studio (SSMS) require you to set the appropriate cloud parameter. In the tool's Azure service setup options, set the cloud parameter to Azure Government.
--- Limitations with multifactor authentication:
- - Hardware OATH tokens are not available in Azure Government.
- - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based off the user's current IP address.
--- Limitations with Azure AD join:
- - Enterprise state roaming for Windows 10 devices is not available
- ### [Azure Defender for IoT](../defender-for-iot/index.yml) For feature variations and limitations, see [Cloud feature availability for US Government customers](../security/fundamentals/feature-availability.md#azure-defender-for-iot).
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
description: This article tracks FedRAMP and DoD compliance scope for Azure, Dyn
Previously updated : 08/24/2021 Last updated : 08/27/2021 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last Updated: August 2021*
+*Last updated: August 2021*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/) | &#x2705; | &#x2705; | | | [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | | | [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | |
-| [Azure Policy Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | |
+| [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | |
| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | &#x2705; | &#x2705; | | | [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/) | &#x2705; | &#x2705; | | | [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | | | [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | | | [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | | | [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | |
+| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | |
-| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
+| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | |
-| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | |
| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | | | [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | | | [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | | | [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | | | **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
-| [GitHub Codespaces](https://visualstudio.microsoft.com/services/github-codespaces/) (formerly Visual Studio Codespaces) | &#x2705; | &#x2705; | |
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | | | [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | | | [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | | | [Power AI Builder](/ai-builder/overview) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; | | | [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | | | [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Storage: Data Movement](../../storage/common/storage-use-data-movement-library.md) | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Storage: Disks](https://azure.microsoft.com/services/storage/disks/) (incl. [managed disks](../../virtual-machines/managed-disks-overview.md)) | &#x2705; | &#x2705; | | | [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | | | [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last Updated: August 2021*
+*Last updated: August 2021*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Policy Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | &#x2705; | &#x2705; | | | | | [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-ad-authentication.md
Last updated 08/02/2021
# Azure AD authentication for Application Insights (Preview)
-Application Insights now supports Azure Active Directory (Azure AD) authentication. By using Azure AD, you can now ensure that only authenticated telemetry is ingested in your Application Insights resources.
+Application Insights now supports Azure Active Directory (Azure AD) authentication. By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
Typically, using various authentication systems can be cumbersome and poses a risk because it's difficult to manage credentials at a large scale. You can now choose to opt out of local authentication and ensure that only telemetry exclusively authenticated using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your Application Insights resource. This feature is a step toward enhancing the security and reliability of the telemetry used to make both critical operational (alerting, autoscale, and so on) and business decisions.
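For example, here's a minimal sketch with the Application Insights .NET SDK and the Azure.Identity package, assuming the connection string is stored in an environment variable:

```csharp
using System;
using Azure.Identity;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var config = new TelemetryConfiguration
{
    ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING")
};

// Route telemetry ingestion through Azure AD instead of relying on local (instrumentation key) auth
config.SetAzureTokenCredential(new DefaultAzureCredential());

var client = new TelemetryClient(config);
client.TrackTrace("Telemetry authenticated with Azure AD");
```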
azure-monitor Azure Cli Metrics Alert Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/azure-cli-metrics-alert-sample.md
+
+ Title: Create metric alert monitors in Azure CLI
+description: Learn how to create metric alerts in Azure Monitor with Azure CLI commands. These samples create alerts for a virtual machine and an App Service Plan.
+++ Last updated : 08/06/2021++++
+# Create metric alert monitors in Azure CLI
+
+These samples create metric alert monitors in Azure Monitor by using Azure CLI commands. The first sample creates an alert for a virtual machine. The second sample creates an alert that includes a dimension for an App Service Plan.
++
+## Create an alert
+
+This alert monitors an existing virtual machine named `VM07` in the resource group named `ContosoVMRG`.
+
+You can create a resource group by using the [az group create](/cli/azure/group#az-group-create) command. For information about creating virtual machines, see [Create a Windows virtual machine with the Azure CLI](../virtual-machines/windows/quick-create-cli.md), [Create a Linux virtual machine with the Azure CLI](../virtual-machines/linux/quick-create-cli.md), and the [az vm create](/cli/azure/vm#az-vm-create) command.
+
+```azurecli
+# resource group name: ContosoVMRG
+# virtual machine name: VM07
+
+# Create scope
+scope=$(az vm show --resource-group ContosoVMRG --name VM07 --output tsv --query id)
+
+# Create action
+action=$(az monitor action-group create --name ContosoWebhookAction \
+ --resource-group ContosoVMRG --output tsv --query id \
+ --action webhook https://alerts.contoso.com usecommonalertschema)
+
+# Create condition
+condition=$(az monitor metrics alert condition create --aggregation Average \
+ --metric "Percentage CPU" --op GreaterThan --type static --threshold 90 --output tsv)
+
+# Create metrics alert
+az monitor metrics alert create --name alert-01 --resource-group ContosoVMRG \
+ --scopes $scope --action $action --condition $condition --description "Test High CPU"
+```
+
+This sample uses the `tsv` output type, which doesn't include unwanted symbols such as quotation marks. For more information, see [Use Azure CLI effectively](/cli/azure/use-cli-effectively).
+
+## Create an alert with a dimension
+
+This sample creates a resource group and an App Service Plan, and then creates a metrics alert for the plan. The example uses a dimension to specify that all instances of the App Service Plan fall under this metric.
+
+```azurecli
+# Create resource group
+az group create --name ContosoRG --location eastus2
+
+# Create application service plan
+az appservice plan create --resource-group ContosoRG --name ContosoAppServicePlan \
+ --is-linux --number-of-workers 4 --sku S1
+
+# Create scope
+scope=$(az appservice plan show --resource-group ContosoRG --name ContosoAppServicePlan \
+ --output tsv --query id)
+
+# Create dimension
+dim01=$(az monitor metrics alert dimension create --name Instance --value '*' --op Include --output tsv)
+
+# Create condition
+condition=$(az monitor metrics alert condition create --aggregation Average \
+ --metric CpuPercentage --op GreaterThan --type static --threshold 90 \
+ --dimension $dim01 --output tsv)
+```
+
+To see a list of the possible metrics, run the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az_monitor_metrics_list_definitions) command. The `--output` parameter displays the values in a readable format.
++
+```azurecli
+az monitor metrics list-definitions --resource $scope --output table
+
+# Create metrics alert
+az monitor metrics alert create --name alert-02 --resource-group ContosoRG \
+ --scopes $scope --condition $condition --description "Service Plan High CPU"
+```
+
+## Clean up deployment
+
+If you created resource groups to test these commands, you can remove a resource group and all its contents by using the [az group delete](/cli/azure/group#az-group-delete) command:
+
+```azurecli
+az group delete --name ContosoVMRG
+
+az group delete --name ContosoRG
+```
+
+If you used existing resources that you want to keep, use the [az monitor metrics alert delete](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-delete) command to delete your practice alerts:
+
+```azurecli
+az monitor metrics alert delete --name alert-01 --resource-group ContosoVMRG
+
+az monitor metrics alert delete --name alert-02 --resource-group ContosoRG
+```
+
+## Azure CLI commands used in this article
+
+This article uses the following Azure CLI commands:
+
+- [az appservice plan create](/cli/azure/appservice/plan#az_appservice_plan_create)
+- [az appservice plan show](/cli/azure/appservice/plan#az_appservice_plan_show)
+- [az group create](/cli/azure/group#az-group-create)
+- [az group delete](/cli/azure/group#az-group-delete)
+- [az monitor action-group create](/cli/azure/monitor/action-group#az_monitor_action_group_create)
+- [az monitor metrics alert condition create](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-condition-create)
+- [az monitor metrics alert create](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-create)
+- [az monitor metrics alert delete](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-delete)
+- [az monitor metrics alert dimension create](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-dimension-create)
+- [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az_monitor_metrics_list_definitions)
+- [az vm show](/cli/azure/vm#az_vm_show)
+
+## Next steps
+
+- [Azure Monitor CLI samples](cli-samples.md)
+- [Understand how metric alerts work in Azure Monitor](alerts/alerts-metric-overview.md)
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/install.md
To install Bicep CLI in an air-gapped environment, you need to download the Bice
- **Linux**
- 1. Download **bicep-linux-x64** from the [Bicep release page](/Azure/bicep/releases/) in a non-air-gapped environment.
+ 1. Download **bicep-linux-x64** from the [Bicep release page](https://github.com/Azure/bicep/releases/latest/) in a non-air-gapped environment.
1. Copy the executable to the **$HOME/.azure/bin** directory on an air-gapped machine. - **macOS**
- 1. Download **bicep-osx-x64** from the [Bicep release page](/Azure/bicep/releases/) in a non-air-gapped environment.
+ 1. Download **bicep-osx-x64** from the [Bicep release page](https://github.com/Azure/bicep/releases/latest/) in a non-air-gapped environment.
1. Copy the executable to the **$HOME/.azure/bin** directory on an air-gapped machine. - **Windows**
- 1. Download **bicep-win-x64.exe** from the [Bicep release page](/Azure/bicep/releases/) in a non-air-gapped environment.
+ 1. Download **bicep-win-x64.exe** from the [Bicep release page](https://github.com/Azure/bicep/releases/latest/) in a non-air-gapped environment.
  1. Copy the executable to the **%UserProfile%/.azure/bin** directory on an air-gapped machine. Note that the `bicep install` and `bicep upgrade` commands don't work in an air-gapped environment.
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/learn-bicep.md
Title: Discover Bicep on Microsoft Learn description: Provides an overview of the units that are available on Microsoft Learn for Bicep. Previously updated : 08/08/2021 Last updated : 08/26/2021 # Bicep on Microsoft Learn
-For step-by-step guidance on using Bicep to deploy your infrastructure to Azure, Microsoft Learn offers several learning modules.
+Ready to see how Bicep can help simplify and accelerate your deployments to Azure? Check out the many hands-on courses on Microsoft Learn.
-## Introductory path
+## Get started
-The [Deploy and manage resources in Azure by using Bicep](/learn/paths/bicep-deploy/) learning path is the best place to start. It introduces you to the concept of infrastructure as code. The path takes you through the steps of building increasingly complex Bicep files.
+These two learning paths will help you get started:
-This path contains the following modules.
+<img src="media/learn-bicep/bicep-deploy-manage.svg" width="101" height="120" alt="The trophy for the Deploy and manage resources in Azure by using Bicep learning path." role="presentation"></img>
-| Learn module | Description |
-| | -- |
-| [Introduction to infrastructure as code using Bicep](/learn/modules/introduction-to-infrastructure-as-code-using-bicep/) | This module describes the benefits of using infrastructure as code, Azure Resource Manager, and Bicep to quickly and confidently scale your cloud deployments. It helps you determine the types of deployments for which Bicep is a good deployment tool. |
-| [Build your first Bicep template](/learn/modules/build-first-bicep-template/) | In this module, you define Azure resources within a Bicep template. You improve the consistency and reliability of your deployments, reduce the manual effort required, and scale your deployments across environments. Your template will be flexible and reusable by using parameters, variables, expressions, and modules. |
-| [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters/) | This module describes how you can use Bicep parameters to provide information for your template during each deployment. You'll learn about parameter decorators, which make your parameters easy to understand and work with. You'll also learn about the different ways that you can provide parameter values and protect them when you're working with secure information. |
-| [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/) | Learn how to use conditions to deploy resources only when specific constraints are in place. Also learn how to use loops to deploy multiple resources that have similar properties. |
-| [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates/) | This module shows how to deploy various Azure resources in your Bicep code. Learn about child and extension resources, and how they can be defined and used within Bicep. Use Bicep to work with resources that you created outside a Bicep template or module. |
-| [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) | Deploy Azure resources at the subscription, management group, and tenant scope. Learn what these resources are, why you would use them, and how you create Bicep code to deploy them. Also learn how to create a single set of Bicep files that you can deploy across multiple scopes in one operation. |
-| [Extend templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/) | Learn how to add custom steps to your Bicep file or Azure Resource Manager template (ARM template) by using deployment scripts. |
+[Part 1: Deploy and manage resources in Azure by using Bicep](/learn/paths/bicep-deploy/)
+
+<img src="media/learn-bicep/bicep-collaborate.svg" width="101" height="120" alt="The trophy for the Build Azure infrastructure in a team environment by using Bicep learning path." role="presentation"></img>
+
+[Part 2: Build Azure infrastructure in a team environment by using Bicep](/learn/paths/bicep-collaborate/)
-## Other modules
-In addition to the preceding path, the following modules contain Bicep content.
+## Azure Pipelines and GitHub Actions modules
+
+In addition to the preceding learning paths, the following modules contain Bicep content related to Azure Pipelines and GitHub Actions.
| Learn module | Description | | | -- |
-| [Manage changes to your Bicep code by using Git](/learn/modules/manage-changes-bicep-code-git/) | Learn how to use Git to support your Bicep development workflow by keeping track of the changes you make as you work. You'll find out how to commit files, view the history of the files you've changed, and how to use branches to develop multiple versions of your code at the same time. You'll also learn how to use GitHub or Azure Repos to publish a repository so that you can collaborate with team members. |
-| [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs/) | Template specs enable you to reuse and share your ARM templates across your organization. Learn how to create and publish template specs, and how to deploy them. You'll also learn how to manage template specs, including how to control access and how to safely update them by using versions. |
-| [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/) | This module teaches you how to preview your changes with the what-if operation. By using what-if, you can make sure your Bicep file only makes changes that you expect. |
-| [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/) | Build Bicep files that support collaborative development and follow best practices. Plan your parameters to make your templates easy to deploy. Use a consistent style, clear structure, and comments to make your Bicep code easy to understand, use, and modify. |
-| [Authenticate your Azure deployment pipeline by using service principals](/learn/modules/authenticate-azure-deployment-pipeline-service-principals/) | Service principals enable your deployment pipelines to authenticate securely with Azure. In this module, you'll learn what service principals are, how they work, and how to create them. You'll also learn how to grant them permission to your Azure resources so that your pipelines can deploy your Bicep files. |
| [Build your first Bicep deployment pipeline by using Azure Pipelines](/learn/modules/build-first-bicep-deployment-pipeline-using-azure-pipelines/) | Build a basic deployment pipeline for Bicep code. Use a service connection to securely identify your pipeline to Azure. Configure when the pipeline runs by using triggers. |
-| [Build your first Bicep deployment workflow by using GitHub Actions](/learn/modules/build-first-bicep-deployment-pipeline-using-github-actions/) | Build a basic deployment workflow for Bicep code. Use a secret to securely identify your GitHub Actions workflow to Azure, and then set when the workflow runs by using triggers and schedules. |
+| [Build your first Bicep deployment workflow by using GitHub Actions](/learn/modules/build-first-bicep-deployment-workflow-using-github-actions/) | Build a basic deployment workflow for Bicep code. Use a secret to securely identify your GitHub Actions workflow to Azure, and then set when the workflow runs by using triggers and schedules. |
+| [Authenticate your Azure deployment pipeline by using service principals](/learn/modules/authenticate-azure-deployment-pipeline-service-principals/) | Service principals enable your deployment pipelines to authenticate securely with Azure. In this module, you'll learn what service principals are, how they work, and how to create them. You'll also learn how to grant them permission to your Azure resources so that your pipelines can deploy your Bicep files. |
## Next steps
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [private-link-limits](../../../includes/private-link-limits.md)]
-## Purview limits
-
-The latest values for Azure Purview quotas can be found in the [Azure Purview quota page](../../purview/how-to-manage-quotas.md)
- ### Traffic Manager limits [!INCLUDE [traffic-manager-limits](../../../includes/traffic-manager-limits.md)]
The latest values for Azure Purview quotas can be found in the [Azure Purview qu
[!INCLUDE [notification-hub-limits](../../../includes/notification-hub-limits.md)]
+## Purview limits
+
+The latest values for Azure Purview quotas can be found in the [Azure Purview quota page](../../purview/how-to-manage-quotas.md).
+ ## Service Bus limits [!INCLUDE [azure-servicebus-limits](../../../includes/service-bus-quotas-table.md)]
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 04/23/2021 Last updated : 08/24/2021 # Move operation support for resources
Jump to a resource provider namespace:
## Microsoft.RecoveryServices
-> [!IMPORTANT]
-> See [Recovery Services move guidance](../../backup/backup-azure-move-recovery-services-vault.md?toc=/azure/azure-resource-manager/toc.json).
+>[!IMPORTANT]
+>- See [Recovery Services move guidance](../../backup/backup-azure-move-recovery-services-vault.md?toc=/azure/azure-resource-manager/toc.json).
+>- See [Continue backups in Recovery Services vault after moving resources across regions](../../backup/azure-backup-move-vaults-across-regions.md?toc=/azure/azure-resource-manager/toc.json).
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move |
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 03/25/2021 Last updated : 08/27/2021 # What is Azure Resource Manager?
Azure provides four levels of scope: [management groups](../../governance/manage
You apply management settings at any of these levels of scope. The level you select determines how widely the setting is applied. Lower levels inherit settings from higher levels. For example, when you apply a [policy](../../governance/policy/overview.md) to the subscription, the policy is applied to all resource groups and resources in your subscription. When you apply a policy on the resource group, that policy is applied to the resource group and all its resources. However, another resource group doesn't have that policy assignment.
+For information about managing identities and access, see [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
+ You can deploy templates to tenants, management groups, subscriptions, or resource groups. ## Resource groups
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
Previously updated : 08/12/2021 Last updated : 08/26/2021 # Features comparison: Azure SQL Database and Azure SQL Managed Instance
The following table lists the major features of SQL Server and provides informat
| [Functions](/sql/t-sql/functions/functions) | Most - see individual functions | Yes - see [Stored procedures, functions, triggers differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#stored-procedures-functions-and-triggers) | | [In-memory optimization](/sql/relational-databases/in-memory-oltp/in-memory-oltp-in-memory-optimization) | Yes in [Premium and Business Critical service tiers](../in-memory-oltp-overview.md).</br> Limited support for non-persistent In-Memory OLTP objects such as memory-optimized table variables in [Hyperscale service tier](service-tier-hyperscale.md).| Yes in [Business Critical service tier](../managed-instance/sql-managed-instance-paas-overview.md) | | [Language elements](/sql/t-sql/language-elements/language-elements-transact-sql) | Most - see individual elements | Yes - see [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md) |
+| [Ledger](ledger-overview.md) | Yes | No |
| [Linked servers](/sql/relational-databases/linked-servers/linked-servers-database-engine) | No - see [Elastic query](elastic-query-horizontal-partitioning.md) | Yes. Only to [SQL Server and SQL Database](../managed-instance/transact-sql-tsql-differences-sql-server.md#linked-servers) without distributed transactions. | | [Linked servers](/sql/relational-databases/linked-servers/linked-servers-database-engine) that read from files (CSV, Excel)| No. Use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#e-importing-data-from-a-csv-file) or [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#g-accessing-data-from-a-csv-file-with-a-format-file) as an alternative for CSV format. | No. Use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#e-importing-data-from-a-csv-file) or [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#g-accessing-data-from-a-csv-file-with-a-format-file) as an alternative for CSV format. Track these requests on [SQL Managed Instance feedback item](https://feedback.azure.com/forums/915676-sql-managed-instance/suggestions/35657887-linked-server-to-non-sql-sources)| | [Log shipping](/sql/database-engine/log-shipping/about-log-shipping-sql-server) | [High availability](high-availability-sla.md) is included with every database. Disaster recovery is discussed in [Overview of business continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md). | Natively built in as a part of [Azure Data Migration Service (DMS)](../../dms/tutorial-sql-server-to-managed-instance.md) migration process. Natively built for custom data migration projects as an external [Log Replay Service (LRS)](../managed-instance/log-replay-service-migrate.md).<br /> Not available as High availability solution, because other [High availability](high-availability-sla.md) methods are included with every database and it is not recommended to use Log-shipping as HA alternative. Disaster recovery is discussed in [Overview of business continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md). Not available as a replication mechanism between databases - use secondary replicas on [Business Critical tier](service-tier-business-critical.md), [auto-failover groups](auto-failover-group-overview.md), or [transactional replication](../managed-instance/replication-transactional-overview.md) as the alternatives. |
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-sql Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure SQL Database description: Sample Azure Resource Graph queries for Azure SQL Database showing use of resource types and tables to access Azure SQL Database related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
azure-web-pubsub Choose Server Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/choose-server-sdks.md
Last updated 03/11/2021
# Choose the server SDKs
-The Azure Web PubSub service provides server SDK with 4 languages: C#, Java, JavaScript and Python.
+The Azure Web PubSub service provides server SDKs in four languages: C#, Java, JavaScript, and Python.
## Server SDK - C#
-[C# Server SDK instruction](https://azure.github.io/azure-webpubsub/references/server-sdks/csharp-server-sdks)
+[C# Server SDK instruction](reference-server-sdk-csharp.md)
## Server SDK - Java
-[Java Server SDK instruction](https://azure.github.io/azure-webpubsub/references/server-sdks/java-server-sdks)
+[Java Server SDK instruction](reference-server-sdk-java.md)
## Server SDK - JavaScript
-[JavaScript Server SDK instruction](https://azure.github.io/azure-webpubsub/references/server-sdks/js-server-sdks)
+[JavaScript Server SDK instruction](reference-server-sdk-js.md)
## Server SDK - Python
-[Python Server SDK instruction](https://azure.github.io/azure-webpubsub/references/server-sdks/python-server-sdks)
+[Python Server SDK instruction](reference-server-sdk-python.md)
azure-web-pubsub Howto Websocket Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/howto-websocket-connect.md
+
+ Title: How to start a WebSocket connection to the Azure Web PubSub service
+description: Learn how to start a WebSocket connection to the Azure Web PubSub service in different languages.
++++ Last updated : 08/26/2021++
+# How to start a WebSocket connection to the Azure Web PubSub service
+
+Clients connect to the Azure Web PubSub service using the standard [WebSocket](https://tools.ietf.org/html/rfc6455) protocol, so any language with WebSocket client support can be used to write a client for the service. The sections below show WebSocket client samples in several languages.
+
+## Auth
+The Web PubSub service uses a [JWT token](https://tools.ietf.org/html/rfc7519.html) to validate and authenticate clients. Clients can either put the token in the `access_token` query parameter or in the `Authorization` header when connecting to the service.
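+
+For example, here's a minimal C# sketch of both options; the endpoint, hub, and token values are placeholders, and the token is assumed to come from your app server:
+
+```csharp
+using System;
+using System.Net.WebSockets;
+using System.Threading;
+
+// `token` is the JWT that the client fetched from its app server
+string token = "<JWT_From_App_Server>";
+
+// Option 1: put the token in the access_token query parameter
+var urlWithToken = $"wss://<your-instance>.webpubsub.azure.com/client/hubs/<hub>?access_token={Uri.EscapeDataString(token)}";
+
+// Option 2: put the token in the Authorization header
+var ws = new ClientWebSocket();
+ws.Options.SetRequestHeader("Authorization", $"Bearer {token}");
+await ws.ConnectAsync(new Uri("wss://<your-instance>.webpubsub.azure.com/client/hubs/<hub>"), CancellationToken.None);
+```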
+
+In a typical workflow, the client first communicates with its app server to get the service URL and the token. The client then opens the WebSocket connection to the service using the URL and token it receives.
+
+The portal also provides a dynamically generated *Client URL* with a token for clients to start a quick test:
++
+> [!NOTE]
+> Make sure to only include necessary roles when generating the token.
+>
+
+To simplify the sample workflow, the sections below use this temporarily generated URL from the portal for the client to connect, with `<Client_URL_From_Portal>` representing the value. The generated token expires in 50 minutes by default, so don't forget to regenerate one when it expires.
+
+The service supports two types of WebSocket clients: the simple WebSocket client and the PubSub WebSocket client. The sections below show how each kind of client connects to the service. For details about these two kinds of clients, see [WebSocket client protocols for Azure Web PubSub](./concept-client-protocols.md).
+
+## Dependency
+
+# [In Browser](#tab/browser)
+In most modern browsers, the `WebSocket` API is natively supported.
+
+# [Node.js](#tab/javascript)
+
+* [Node.js 12.x or above](https://nodejs.org)
+* `npm install ws`
+
+# [Python](#tab/python)
+* [Python](https://www.python.org/)
+* `pip install websockets`
+
+# [C#](#tab/csharp)
+
+* [.NET Core 2.1 or above](https://dotnet.microsoft.com/download)
+* `dotnet add package Websocket.Client`
+ * [Websocket.Client](https://github.com/Marfusios/websocket-client) is a third-party WebSocket client with built-in reconnection and error handling
+
+# [Java](#tab/java)
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above.
+- [Apache Maven](https://maven.apache.org/download.cgi).
+++
+## Simple WebSocket Client
+
+# [In Browser](#tab/browser)
+
+Inside the `script` block of the HTML page:
+```html
+<script>
+ // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ let ws = new WebSocket("<Client_URL_From_Portal>");
+ ws.onopen = () => {
+ // Do things when the WebSocket connection is established
+ };
+
+ ws.onmessage = event => {
+ // Do things when messages are received.
+ };
+</script>
+```
+
+# [Node.js](#tab/javascript)
+
+```js
+const WebSocket = require('ws');
+// Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+const client = new WebSocket("<Client_URL_From_Portal>");
+client.on('open', () => {
+ // Do things when the WebSocket connection is established
+});
+client.on('message', msg => {
+ // Do things when messages are received.
+});
+```
+
+# [Python](#tab/python)
+
+```python
+import asyncio
+import websockets
+
+async def hello():
+ # Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ uri = '<Client_URL_From_Portal>'
+ async with websockets.connect(uri) as ws:
+ while True:
+ await ws.send('hello')
+ greeting = await ws.recv()
+ print(greeting)
+
+asyncio.get_event_loop().run_until_complete(hello())
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.IO;
+using System.Text;
+using System.Threading.Tasks;
+using Websocket.Client;
+
+namespace subscriber
+{
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ using (var client = new WebsocketClient(new Uri("<Client_URL_From_Portal>")))
+ {
+                // Disable the automatic disconnect and reconnect so that the client stays online even when no data comes in
+ client.ReconnectTimeout = null;
+ client.MessageReceived.Subscribe(msg => Console.WriteLine($"Message received: {msg}"));
+ await client.Start();
+ Console.WriteLine("Connected.");
+ Console.Read();
+ }
+ }
+ }
+}
+```
+
+# [Java](#tab/java)
+
+```java
+package client;
+
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.WebSocket;
+import java.util.concurrent.CompletionStage;
+
+/**
+ * A simple WebSocket Client.
+ *
+ */
+public final class SimpleClient {
+ private SimpleClient() {
+ }
+
+ /**
+ * Starts a simple WebSocket connection.
+ * @param args The arguments of the program.
+ */
+ public static void main(String[] args) throws Exception {
+ // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ WebSocket ws = HttpClient.newHttpClient().newWebSocketBuilder()
+ .buildAsync(URI.create("<Client_URL_From_Portal>"), new WebSocketClient()).join();
+ System.in.read();
+ }
+
+ private static final class WebSocketClient implements WebSocket.Listener {
+ private WebSocketClient() {
+ }
+
+ @Override
+ public void onOpen(WebSocket webSocket) {
+ System.out.println("onOpen using subprotocol " + webSocket.getSubprotocol());
+ WebSocket.Listener.super.onOpen(webSocket);
+ }
+
+ @Override
+ public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
+ System.out.println("onText received " + data);
+ return WebSocket.Listener.super.onText(webSocket, data, last);
+ }
+
+ @Override
+ public void onError(WebSocket webSocket, Throwable error) {
+ System.out.println("Bad day! " + webSocket.toString());
+ WebSocket.Listener.super.onError(webSocket, error);
+ }
+ }
+}
+
+```
++++
+## PubSub WebSocket Client
+
+# [In Browser](#tab/browser)
+
+Inside the `script` block of the HTML page:
+```html
+<script>
+ // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ let ws = new WebSocket("<Client_URL_From_Portal>", 'json.webpubsub.azure.v1');
+ ws.onopen = () => {
+ // Do things when the WebSocket connection is established
+ };
+
+ ws.onmessage = event => {
+ // Do things when messages are received.
+ };
+</script>
+```
+
+# [Node.js](#tab/javascript)
+
+```js
+const WebSocket = require('ws');
+// Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+const client = new WebSocket("<Client_URL_From_Portal>", "json.webpubsub.azure.v1");
+client.on('open', () => {
+ // Do things when the WebSocket connection is established
+});
+client.on('message', msg => {
+ // Do things when messages are received.
+});
+```
+
+# [Python](#tab/python)
+
+```python
+import asyncio
+import websockets
+
+async def join_group():
+ # Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ uri = '<Client_URL_From_Portal>'
+ async with websockets.connect(uri, subprotocols=['json.webpubsub.azure.v1']) as ws:
+ await ws.send('{"type":"joinGroup","ackId":1,"group":"group1"}')
+ return await ws.recv()
+
+print(asyncio.get_event_loop().run_until_complete(join_group()))
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.IO;
+using System.Net.WebSockets;
+using System.Text;
+using System.Threading.Tasks;
+using Websocket.Client;
+
+namespace subscriber
+{
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ using (var client = new WebsocketClient(new Uri("<Client_URL_From_Portal>"), () =>
+ {
+ var inner = new ClientWebSocket();
+ inner.Options.AddSubProtocol("json.webpubsub.azure.v1");
+ return inner;
+ }))
+ {
+                // Disable automatic disconnect and reconnect because the sample wants the client to stay online even if no data comes in
+ client.ReconnectTimeout = null;
+ client.MessageReceived.Subscribe(msg => Console.WriteLine($"Message received: {msg}"));
+ await client.Start();
+ Console.WriteLine("Connected.");
+ Console.Read();
+ }
+ }
+ }
+}
+```
+
+# [Java](#tab/java)
+
+```java
+package client;
+
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.WebSocket;
+import java.util.concurrent.CompletionStage;
+
+/**
+ * A PubSub WebSocket Client.
+ *
+ */
+public final class SubprotocolClient {
+ private SubprotocolClient() {
+ }
+
+ /**
+ * Starts a PubSub WebSocket connection.
+ * @param args The arguments of the program.
+ */
+ public static void main(String[] args) throws Exception {
+ // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
+ WebSocket ws = HttpClient.newHttpClient().newWebSocketBuilder().subprotocols("json.webpubsub.azure.v1")
+ .buildAsync(URI.create("<Client_URL_From_Portal>"), new WebSocketClient()).join();
+
+ ws.sendText("{\"type\":\"joinGroup\",\"ackId\":1,\"group\":\"group1\"}", true);
+ System.in.read();
+ }
+
+ private static final class WebSocketClient implements WebSocket.Listener {
+ private WebSocketClient() {
+ }
+
+ @Override
+ public void onOpen(WebSocket webSocket) {
+ System.out.println("onOpen using subprotocol " + webSocket.getSubprotocol());
+ WebSocket.Listener.super.onOpen(webSocket);
+ }
+
+ @Override
+ public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
+ System.out.println("onText received " + data);
+ return WebSocket.Listener.super.onText(webSocket, data, last);
+ }
+
+ @Override
+ public void onError(WebSocket webSocket, Throwable error) {
+ System.out.println("Bad day! " + webSocket.toString());
+ WebSocket.Listener.super.onError(webSocket, error);
+ }
+ }
+}
+```
+++
+## Next step
+
+In this article, you learned how to connect to the service using the URL generated from the portal. Check the following tutorials to see how clients communicate with the app server to get the URL in real-world applications.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create a chatroom with Azure Web PubSub](./tutorial-build-chat.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Client streaming using service-supported subprotocol](./tutorial-subprotocol.md)
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://aka.ms/awps/samples)
+
azure-web-pubsub Reference Server Sdk Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-csharp.md
+
+ Title: Reference - .NET server SDK for Azure Web PubSub service
+description: The reference describes the .NET server SDK for Azure Web PubSub service
++++ Last updated : 08/26/2021++
+# .NET server SDK for Azure Web PubSub service
+
+This library can be used to do the following actions. Details about the terms used here are described in the [Key concepts](#key-concepts) section.
+
+- Send messages to hubs and groups.
+- Send messages to particular users and connections.
+- Organize users and connections into groups.
+- Close connections.
+- Grant, revoke, and check permissions for an existing connection.
+
+[Source code][code] |
+[Package][package] |
+[API reference documentation][api] |
+[Product documentation](https://aka.ms/awps/doc) |
+[Samples][samples_ref]
+
+## Getting started
+### Install the package
+
+Install the client library from [NuGet](https://www.nuget.org/):
+
+```PowerShell
+dotnet add package Azure.Messaging.WebPubSub --prerelease
+```
+
+### Prerequisites
+
+- An [Azure subscription][azure_sub].
+- An existing Azure Web PubSub service instance.
+
+### Authenticate the client
+
+To interact with the service, you'll need to create an instance of the WebPubSubServiceClient class. You'll need the connection string or a key, which you can access in the Azure portal.
+
+### Create a `WebPubSubServiceClient`
+
+```csharp
+var serviceClient = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", new AzureKeyCredential("<access-key>"));
+```
+
+## Key concepts
++
+## Examples
+
+### Broadcast a text message to all clients
+
+```C# Snippet:WebPubSubHelloWorld
+var serviceClient = new WebPubSubServiceClient(new Uri(endpoint), "some_hub", new AzureKeyCredential(key));
+
+serviceClient.SendToAll("Hello World!");
+```
+
+### Broadcast a JSON message to all clients
+
+```C# Snippet:WebPubSubSendJson
+var serviceClient = new WebPubSubServiceClient(new Uri(endpoint), "some_hub", new AzureKeyCredential(key));
+
+serviceClient.SendToAll(RequestContent.Create(
+ new
+ {
+ Foo = "Hello World!",
+ Bar = 42
+ }),
+ ContentType.ApplicationJson);
+```
+
+### Broadcast a binary message to all clients
+
+```C# Snippet:WebPubSubSendBinary
+var serviceClient = new WebPubSubServiceClient(new Uri(endpoint), "some_hub", new AzureKeyCredential(key));
+
+Stream stream = BinaryData.FromString("Hello World!").ToStream();
+serviceClient.SendToAll(RequestContent.Create(stream), ContentType.ApplicationOctetStream);
+```
+
+## Troubleshooting
+
+### Setting up console logging
+You can also easily [enable console logging](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md#logging) if you want to dig deeper into the requests you're making against the service.
+
+[azure_sub]: https://azure.microsoft.com/free/
+[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp
+[code]: https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/webpubsub/Azure.Messaging.WebPubSub/src
+[package]: https://www.nuget.org/packages/Azure.Messaging.WebPubSub
+[api]: /dotnet/api/azure.messaging.webpubsub
+
+## Next steps
+
azure-web-pubsub Reference Server Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-java.md
+
+ Title: Reference - Java server SDK for Azure Web PubSub service
+description: The reference describes the Java server SDK for Azure Web PubSub service
++++ Last updated : 08/26/2021++
+# Java server SDK for Azure Web PubSub service
+
+Use the library to:
+
+- Send messages to hubs and groups.
+- Send messages to particular users and connections.
+- Organize users and connections into groups.
+- Close connections.
+- Grant/revoke/check permissions for an existing connection.
+
+[Source code][source_code] | [API Reference Documentation][api] | [Product Documentation][product_documentation] | [Samples][samples_readme]
+
+## Getting started
+
+### Prerequisites
+
+- A [Java Development Kit (JDK)][jdk_link], version 8 or later.
+- [Azure Subscription][azure_subscription]
+
+### Include the Package
+
+[//]: # ({x-version-update-start;com.azure:azure-messaging-webpubsub;current})
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-webpubsub</artifactId>
+ <version>1.0.0-beta.2</version>
+</dependency>
+```
+
+[//]: # ({x-version-update-end})
+
+### Create a Web PubSub client using connection string
+
+```java
+WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder()
+ .connectionString("{connection-string}")
+ .hub("chat")
+ .buildClient();
+```
+
+### Create a Web PubSub client using access key
+
+```java
+WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder()
+ .credential(new AzureKeyCredential("{access-key}"))
+ .endpoint("<Insert endpoint from Azure Portal>")
+ .hub("chat")
+ .buildClient();
+```
+
+### Create a Web PubSub Group client
+```java
+WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder()
+ .credential(new AzureKeyCredential("{access-key}"))
+ .hub("chat")
+ .buildClient();
+WebPubSubGroup javaGroup = webPubSubServiceClient.getGroup("java");
+```
+
+## Key concepts
+++
+## Examples
+
+### Broadcast message to entire hub
+
+```java
+webPubSubServiceClient.sendToAll("Hello world!");
+```
+
+### Broadcast message to a group
+
+```java
+WebPubSubGroup javaGroup = webPubSubServiceClient.getGroup("Java");
+javaGroup.sendToAll("Hello Java!");
+```
+
+### Send message to a connection
+
+```java
+webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!");
+```
+
+### Send message to a user
+```java
+webPubSubServiceClient.sendToUser("Andy", "Hello Andy!");
+```
+
+## Troubleshooting
+
+### Enable client logging
+You can set the `AZURE_LOG_LEVEL` environment variable to view logging statements made in the client library. For
+example, setting `AZURE_LOG_LEVEL=2` would show all informational, warning, and error log messages. The log levels can
+be found here: [log levels][log_levels].
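+
+For example, in a bash shell (a sketch; as described above, `2` shows informational, warning, and error messages):
+
+```bash
+export AZURE_LOG_LEVEL=2
+```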
+
+### Default HTTP Client
+All client libraries by default use the Netty HTTP client. Adding the above dependency will automatically configure
+the client library to use the Netty HTTP client. Configuring or changing the HTTP client is detailed in the
+[HTTP clients wiki](/azure/developer/java/sdk/http-client-pipeline).
+
+### Default SSL library
+All client libraries, by default, use the Tomcat-native Boring SSL library to enable native-level performance for SSL
+operations. The Boring SSL library is an uber jar containing native libraries for Linux / macOS / Windows, and provides
+better performance compared to the default SSL implementation within the JDK. For more information, including how to
+reduce the dependency size, see the [performance tuning][performance_tuning] section of the wiki.
+
+[azure_subscription]: https://azure.microsoft.com/free
+[jdk_link]: /java/azure/jdk
+[source_code]: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/webpubsub/azure-messaging-webpubsub/src
+[product_documentation]: https://aka.ms/awps/doc
+[samples_readme]: https://github.com/Azure/azure-webpubsub/tree/main/samples/java
+[log_levels]: https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/core/azure-core/src/main/java/com/azure/core/util/logging/ClientLogger.java
+[performance_tuning]: https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning
+[api]: /java/api/com.azure.messaging.webpubsub
+
+## Next steps
+
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-js.md
+
+ Title: Reference - JavaScript SDK for the Azure Web PubSub service
+description: The reference describes the JavaScript SDK for the Azure Web PubSub service
++++ Last updated : 08/26/2021++
+# JavaScript SDK for the Azure Web PubSub service
+
+There are two libraries offered for JavaScript:
+- [Service client library](#service-client-library) to
+ - Send messages to hubs and groups.
+ - Send messages to particular users and connections.
+ - Organize users and connections into groups.
+  - Close connections.
+  - Grant/revoke/check permissions for an existing connection.
+- [Express middleware](#express) to handle incoming client events
+ - Handle abuse validation requests
+ - Handle client events requests
+
+<a name="service-client-library"></a>
+
+## Azure Web PubSub service client library for JavaScript
+Use the library to:
+
+- Send messages to hubs and groups.
+- Send messages to particular users and connections.
+- Organize users and connections into groups.
+- Close connections.
+- Grant/revoke/check permissions for an existing connection.
+
+[Source code](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/web-pubsub/web-pubsub) |
+[Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub) |
+[API reference documentation](/javascript/api/@azure/web-pubsub/) |
+[Product documentation](https://aka.ms/awps/doc) |
+[Samples][samples_ref]
+
+### Getting started
+
+#### Currently supported environments
+
+- [Node.js](https://nodejs.org/) version 8.x.x or higher
+
+#### Prerequisites
+
+- An [Azure subscription][azure_sub].
+- An existing Azure Web PubSub service instance.
+
+#### 1. Install the `@azure/web-pubsub` package
+
+```bash
+npm install @azure/web-pubsub
+```
+
+#### 2. Create and authenticate a WebPubSubServiceClient
+
+```js
+const { WebPubSubServiceClient } = require("@azure/web-pubsub");
+
+const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+```
+
+You can also authenticate the `WebPubSubServiceClient` using an endpoint and an `AzureKeyCredential`:
+
+```js
+const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
+
+const key = new AzureKeyCredential("<Key>");
+const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>");
+```
+
+### Key concepts
++
+### Examples
+
+#### Broadcast a JSON message to all users
+
+```js
+const { WebPubSubServiceClient } = require("@azure/web-pubsub");
+
+const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+await serviceClient.sendToAll({ message: "Hello world!" });
+```
+
+#### Broadcast a plain text message to all users
+
+```js
+const { WebPubSubServiceClient } = require("@azure/web-pubsub");
+
+const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+await serviceClient.sendToAll("Hi there!", { contentType: "text/plain" });
+```
+
+#### Broadcast a binary message to all users
+
+```js
+const { WebPubSubServiceClient } = require("@azure/web-pubsub");
+
+const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+
+const payload = new Uint8Array(10);
+await serviceClient.sendToAll(payload.buffer);
+```
+
+### Troubleshooting
+
+#### Enable logs
+
+You can set the following environment variable to get the debug logs when using this library.
+
+- Getting debug logs from the Azure Web PubSub client library
+
+```bash
+export AZURE_LOG_LEVEL=verbose
+```
+
+For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/core/logger).
+
+<a name="express"></a>
+
+## Azure Web PubSub CloudEvents handlers for Express
+
+Use the express library to:
+- Add Web PubSub CloudEvents middleware to handle incoming client events
+ - Handle abuse validation requests
+ - Handle client events requests
+
+[Source code](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/web-pubsub/web-pubsub-express) |
+[Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub-express) |
+[API reference documentation](/javascript/api/@azure/web-pubsub-express/) |
+[Product documentation](https://aka.ms/awps/doc) |
+[Samples][samples_ref]
+
+### Getting started
+
+#### Currently supported environments
+
+- [Node.js](https://nodejs.org/) version 8.x.x or higher
+- [Express](https://expressjs.com/) version 4.x.x or higher
+
+#### Prerequisites
+
+- An [Azure subscription][azure_sub].
+- An existing Azure Web PubSub endpoint.
+
+#### 1. Install the `@azure/web-pubsub-express` package
+
+```bash
+npm install @azure/web-pubsub-express
+```
+
+#### 2. Create a WebPubSubEventHandler
+
+```js
+const express = require("express");
+
+const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");
+const handler = new WebPubSubEventHandler(
+ "chat",
+ ["https://<yourAllowedService>.webpubsub.azure.com"],
+ {
+ handleConnect: (req, res) => {
+ // auth the connection and set the userId of the connection
+ res.success({
+ userId: "<userId>"
+ });
+ }
+ }
+);
+
+const app = express();
+
+app.use(handler.getMiddleware());
+
+app.listen(3000, () =>
+ console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+);
+```
+
+### Key concepts
+
+#### Client Events
+
+Events are created during the lifecycle of a client connection. For example, a simple WebSocket client connection creates a `connect` event when it tries to connect to the service, a `connected` event when it successfully connects to the service, a `message` event when it sends messages to the service, and a `disconnected` event when it disconnects from the service.
+
+#### Event Handler
+
+The event handler contains the logic to handle client events. It needs to be registered and configured in the service through the portal or Azure CLI beforehand. The event handler logic is typically hosted on the server side.
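+
+As an illustration, here's a minimal sketch of a handler for a custom `message` event, modeled on the `WebPubSubEventHandler` options shown earlier; the hub name and allowed endpoint are placeholders:
+
+```js
+const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");
+
+const handler = new WebPubSubEventHandler(
+  "chat",
+  ["https://<yourAllowedService>.webpubsub.azure.com"],
+  {
+    handleUserEvent: (req, res) => {
+      // req.context carries metadata such as the event name and the user ID
+      if (req.context.eventName === "message") {
+        console.log(`Received message from ${req.context.userId}: ${req.data}`);
+      }
+      // Acknowledge the event so the client's send completes successfully
+      res.success();
+    }
+  }
+);
+```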
+
+### Troubleshooting
+
+#### Dump request
+
+Set `dumpRequest` to `true` to view the incoming requests.
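+
+For example, assuming the same handler constructor as above, the option can be passed when the event handler is created (a sketch based on this article's description of the option):
+
+```js
+const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");
+
+// dumpRequest logs each incoming CloudEvents request,
+// which helps when diagnosing event delivery from the service.
+const handler = new WebPubSubEventHandler(
+  "chat",
+  ["https://<yourAllowedService>.webpubsub.azure.com"],
+  { dumpRequest: true }
+);
+```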
+
+#### Live Trace
+
+Use **Live Trace** from the Web PubSub service portal to view the live traffic.
+
+[azure_sub]: https://azure.microsoft.com/free/
+[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript
++
+## Next steps
+
azure-web-pubsub Reference Server Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-python.md
+
+ Title: Reference - Python server SDK for Azure Web PubSub service
+description: The reference describes the Python server SDK for Azure Web PubSub service
++++ Last updated : 08/26/2021++
+# Python server SDK for Azure Web PubSub service
+
+Use the library to:
+
+- Send messages to hubs and groups.
+- Send messages to particular users and connections.
+- Organize users and connections into groups.
+- Close connections.
+- Grant/revoke/check permissions for an existing connection.
+
+[Source code](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/webpubsub/azure-messaging-webpubsubservice) | [Package (Pypi)][package] | [API reference documentation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/webpubsub/azure-messaging-webpubsubservice) | [Product documentation][webpubsubservice_docs] |
+[Samples][samples_ref]
+
+## Getting started
+
+### Install the package
+
+```bash
+python -m pip install azure-messaging-webpubsubservice
+```
+
+#### Prerequisites
+
+- Python 2.7 or 3.6+ is required to use this package.
+- An [Azure subscription][azure_sub].
+- An existing [Azure Web PubSub service instance][webpubsubservice_docs].
+
+### Authenticating the client
+
+To interact with the Azure Web PubSub service, you'll need to create an instance of the [WebPubSubServiceClient][webpubsubservice_client_class] class. To authenticate against the service, pass in an AzureKeyCredential instance along with the endpoint and access key, both of which can be found in the Azure portal.
+
+```python
+>>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
+>>> from azure.core.credentials import AzureKeyCredential
+>>> client = WebPubSubServiceClient(endpoint='<endpoint>', credential=AzureKeyCredential('somesecret'))
+>>> client
+<WebPubSubServiceClient endpoint:'<endpoint>'>
+```
+
+## Examples
+
+### Sending a request
+
+```python
+>>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
+>>> from azure.core.credentials import AzureKeyCredential
+>>> from azure.messaging.webpubsubservice.rest import build_send_to_all_request
+>>> client = WebPubSubServiceClient(endpoint='<endpoint>', credential=AzureKeyCredential('somesecret'))
+>>> request = build_send_to_all_request('default', json={ 'Hello': 'webpubsub!' })
+>>> request
+<HttpRequest [POST], url: '/api/hubs/default/:send?api-version=2020-10-01'>
+>>> response = client.send_request(request)
+>>> response
+<RequestsTransportResponse: 202 Accepted>
+>>> response.status_code
+202
+>>> with open('file.json', 'r') as f:
+>>> request = build_send_to_all_request('ahub', content=f, content_type='application/json')
+>>> response = client.send_request(request)
+>>> print(response)
+<RequestsTransportResponse: 202 Accepted>
+```
+
+## Key concepts
++
+## Troubleshooting
+
+### Logging
+
+This SDK uses the Python standard logging library.
+You can configure logging to print debugging information to stdout or anywhere else you want.
+
+```python
+import logging
+
+logging.basicConfig(level=logging.DEBUG)
+```
+
+HTTP request and response details are printed to stdout with this logging configuration.
+
+[webpubsubservice_docs]: https://aka.ms/awps/doc
+[azure_cli]: /cli/azure
+[azure_sub]: https://azure.microsoft.com/free/
+[webpubsubservice_client_class]: https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/webpubsub/azure-messaging-webpubsubservice/azure/messaging/webpubsubservice/__init__.py
+[package]: https://pypi.org/project/azure-messaging-webpubsubservice/
+[default_cred_ref]: https://aka.ms/azsdk-python-identity-default-cred-ref
+[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/python
+
+## Next steps
+
azure-web-pubsub Tutorial Permission https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-permission.md
+
+ Title: Tutorial - Add authentication and permissions to your application when using Azure Web PubSub service
+description: A tutorial to walk through how to add authentication and permissions to your application when using Azure Web PubSub service
++++ Last updated : 08/26/2021++
+# Tutorial: Add authentication and permissions to your application when using Azure Web PubSub service
+
+In the [Build a chat app tutorial](./tutorial-build-chat.md), you learned how to use WebSocket APIs to send and receive data with Azure Web PubSub. You may have noticed that, for simplicity, it doesn't require any authentication. Though Azure Web PubSub requires an access token to connect, the `negotiate` API used in the tutorial to generate the access token doesn't need authentication, so anyone can call it to get an access token.
+
+In a real-world application, it's common to require users to log in before they can use your application, to protect it from abuse. In this tutorial, you'll learn how to integrate Azure Web PubSub with the authentication/authorization system of your application to make it more secure.
+
+The complete code sample of this tutorial can be found [here][code].
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Enable GitHub authentication
+> * Add authentication middleware to your application
+> * Add permissions to the clients
+
+## Add authentication to the chat room app
+
+This tutorial reuses the chat application created in [Build a chat app tutorial](./tutorial-build-chat.md). You can also clone the complete code sample for the chat app from [here][chat-js].
+
+In this tutorial, we will add authentication to the chat application and integrate it with Azure Web PubSub service.
+
+First, let's add GitHub authentication to the chat room so users can log in with their GitHub accounts.
+
+1. Install dependencies
+
+ ```bash
+ npm install --save cookie-parser
+ npm install --save express-session
+ npm install --save passport
+ npm install --save passport-github2
+ ```
+
+2. Add the following code to `server.js` to enable GitHub authentication
+
+ ```javascript
+    const express = require('express');
+    const cookieParser = require('cookie-parser');
+    const session = require('express-session');
+    const passport = require('passport');
+    // Strategy for GitHub OAuth, provided by the passport-github2 package installed above
+    const GitHubStrategy = require('passport-github2').Strategy;
+
+    const app = express();
+
+ const users = {};
+ passport.use(
+ new GitHubStrategy({
+ clientID: process.argv[3],
+ clientSecret: process.argv[4]
+ },
+ (accessToken, refreshToken, profile, done) => {
+ users[profile.id] = profile;
+ return done(null, profile);
+ }
+ ));
+
+ passport.serializeUser((user, done) => {
+ done(null, user.id);
+ });
+
+ passport.deserializeUser((id, done) => {
+ if (users[id]) return done(null, users[id]);
+ return done(`invalid user id: ${id}`);
+ });
+
+ app.use(cookieParser());
+ app.use(session({
+ resave: false,
+ saveUninitialized: true,
+ secret: 'keyboard cat'
+ }));
+ app.use(passport.initialize());
+ app.use(passport.session());
+ app.get('/auth/github', passport.authenticate('github', { scope: ['user:email'] }));
+ app.get('/auth/github/callback', passport.authenticate('github', { successRedirect: '/' }));
+ ```
+
+ The code above uses [Passport.js](http://www.passportjs.org/) to enable GitHub authentication. Here is a simple illustration of how it works:
+
+ 1. `/auth/github` will redirect to github.com for login
+   2. After login is completed, GitHub redirects you to `/auth/github/callback` with a code for your application to complete the authentication (the verify callback in `passport.use()` shows how the profile returned from GitHub is verified and persisted on the server).
+ 3. After authentication is completed, you'll be redirected to the homepage (`/`) of the site.
+
+ For more details about GitHub OAuth and Passport.js, see the following articles:
+
+ - [Authorizing OAuth Apps](https://docs.github.com/en/developers/apps/authorizing-oauth-apps)
+ - [Passport.js doc](http://www.passportjs.org/docs/)
+
+ To test this, you need to first create a GitHub OAuth app:
+
+ 1. Go to https://www.github.com, open your profile -> Settings -> Developer settings
+ 2. Go to OAuth Apps, click "New OAuth App"
+ 3. Fill in application name, homepage URL (can be anything you like), and set Authorization callback URL to `http://localhost:8080/auth/github/callback` (which matches the callback API you exposed in the server)
+ 4. After the application is registered, copy the Client ID and click "Generate a new client secret" to generate a new client secret
+
+   Then run `node server <connection-string> <client-id> <client-secret>` and open `http://localhost:8080/auth/github`; you'll be redirected to GitHub to log in. After the login succeeds, you'll be redirected to the chat application.
+
+3. Then let's update the chat room to make use of the identity we get from GitHub, instead of popping up a dialog to ask for a username.
+
+   Update `public/index.html` to directly call `/negotiate` without passing in a user ID.
+
+ ```javascript
+ let messages = document.querySelector('#messages');
+ let res = await fetch(`/negotiate`);
+ if (res.status === 401) {
+ let m = document.createElement('p');
+ m.innerHTML = 'Not authorized, click <a href="/auth/github">here</a> to login';
+ messages.append(m);
+ return;
+ }
+ let data = await res.json();
+ let ws = new WebSocket(data.url);
+ ```
+
+   When a user is logged in, the request automatically carries the user's identity through a cookie. So we just need to check whether the user exists in the `req` object and add the username to the Web PubSub access token:
+
+ ```javascript
+ app.get('/negotiate', async (req, res) => {
+ if (!req.user || !req.user.username) {
+ res.status(401).send('missing user id');
+ return;
+ }
+ let options = {
+ userId: req.user.username
+ };
+ let token = await serviceClient.getAuthenticationToken(options);
+ res.json({
+ url: token.url
+ });
+ });
+ ```
+
+   Now rerun the server. The first time you open the chat room, you'll see a "not authorized" message. Click the login link to log in, and you'll see it works as before.
+
+## Working with permissions
+
+In the previous tutorials, you learned to use `WebSocket.send()` to directly publish messages to other clients using a subprotocol. In a real application, you may not want clients to be able to publish or subscribe to any group without permission control. In this section, you'll see how to control clients using the permission system of Azure Web PubSub.
+
+In Azure Web PubSub there are three types of operations a client can do with a subprotocol:
+
+- Send events to server
+- Publish messages to a group
+- Join (subscribe) a group
+
+Sending events to the server is the default operation of a client even when no subprotocol is used, so it's always allowed. To publish or subscribe to a group, the client needs to get permission first. There are two ways for the server to grant permission to clients:
+
+- Specify roles when a client is connected (role is a concept to represent initial permissions when a client is connected)
+- Use API to grant permission to a client after it's connected
+
+For the join group permission, the client still needs to join the group with a join group message after it gets the permission. Alternatively, the server can use the API to add a client to a group even if it doesn't have the join permission. Both ways of granting permission are sketched below.
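+
+Here's a rough sketch of the two approaches using the `@azure/web-pubsub` server SDK. The role claim mirrors the usage later in this tutorial; `grantPermission` and the `<connection-id>` placeholder are illustrative and may vary by SDK version:
+
+```javascript
+const { WebPubSubServiceClient } = require("@azure/web-pubsub");
+
+const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "chat");
+
+async function grantExamples() {
+  // Way 1: specify a role when generating the client access token, so the
+  // client starts with permission to publish to the "system" group.
+  const token = await serviceClient.getAuthenticationToken({
+    userId: "user1",
+    claims: { role: ['webpubsub.sendToGroup.system'] }
+  });
+
+  // Way 2: grant the permission to an already-connected client, identified
+  // by its connection ID, after the connection is established.
+  await serviceClient.grantPermission("<connection-id>", "sendToGroup", {
+    targetName: "system"
+  });
+}
+```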
+
+Now let's use this permission system to add a new feature to the chat room. We'll add a new type of user called administrator; administrators will be allowed to send system messages (messages starting with "[SYSTEM]") directly from the client.
+
+First we need to separate system and user messages into two different groups so their permissions can be controlled separately.
+
+Change `server.js` to send different messages to different groups:
+
+```javascript
+let handler = new WebPubSubEventHandler(hubName, ['*'], {
+ path: '/eventhandler',
+ handleConnect: (req, res) => {
+ res.success({
+ groups: ['system', 'message'],
+ });
+ },
+ onConnected: req => {
+ console.log(`${req.context.userId} connected`);
+ serviceClient.group('system').sendToAll(`${req.context.userId} joined`, { contentType: 'text/plain' });
+ },
+ handleUserEvent: (req, res) => {
+ if (req.context.eventName === 'message') {
+ serviceClient.group('message').sendToAll({
+ user: req.context.userId,
+ message: req.data
+ });
+ }
+ res.success();
+ }
+});
+```
+
+You can see the code above uses `WebPubSubServiceClient.group().sendToAll()` to send messages to a group instead of the whole hub.
+
+Since the message is now sent to groups, we need to add clients to groups so they can continue receiving messages. This is done in the `handleConnect` handler.
+
+> [!Note]
+> `handleConnect` is triggered when a client is trying to connect to Azure Web PubSub. In this handler you can return groups and roles, so the service can add the connection to groups or grant roles as soon as the connection is established. The handler can also call `res.fail()` to deny the connection.
+>
+
+To get `handleConnect` triggered, go to the event handler settings in the Azure portal and check `connect` under system events.
+
+We also need to update the client HTML since the server now sends JSON messages instead of plain text:
+
+```javascript
+let ws = new WebSocket(data.url, 'json.webpubsub.azure.v1');
+ws.onopen = () => console.log('connected');
+
+ws.onmessage = event => {
+ let m = document.createElement('p');
+ let message = JSON.parse(event.data);
+ switch (message.type) {
+ case 'message':
+ if (message.group === 'system') m.innerText = `[SYSTEM] ${message.data}`;
+ else if (message.group === 'message') m.innerText = `[${message.data.user}] ${message.data.message}`;
+ break;
+ }
+ messages.appendChild(m);
+};
+
+let message = document.querySelector('#message');
+message.addEventListener('keypress', e => {
+ if (e.charCode !== 13) return;
+ ws.send(JSON.stringify({
+ type: 'event',
+ event: 'message',
+ dataType: 'text',
+ data: message.value
+ }));
+ message.value = '';
+});
+```
+
+Then change the client code to send to the system group when users click "system message":
+
+```html
+<button id="system">system message</button>
+...
+<script>
+ (async function() {
+ ...
+ let system = document.querySelector('#system');
+ system.addEventListener('click', e => {
+ ws.send(JSON.stringify({
+ type: 'sendToGroup',
+ group: 'system',
+ dataType: 'text',
+ data: message.value
+ }));
+ message.value = '';
+ });
+ })();
+</script>
+```
+
+By default, a client doesn't have permission to send to any group. Update the server code to grant permission to the admin user (for simplicity, the ID of the admin is provided as a command-line argument).
+
+```javascript
+app.get('/negotiate', async (req, res) => {
+ ...
+ if (req.user.username === process.argv[5]) options.claims = { role: ['webpubsub.sendToGroup.system'] };
+ let token = await serviceClient.getAuthenticationToken(options);
+});
+```
+
+Now run `node server <connection-string> <client-id> <client-secret> <admin-id>`. You'll see that you can send a system message to every client when you log in as `<admin-id>`.
+
+But if you log in as a different user and click "system message", nothing happens. You may expect the service to give you an error letting you know the operation isn't allowed. This can be done by setting an `ackId` when publishing the message. Whenever an `ackId` is specified, Azure Web PubSub returns an ack message with a matching `ackId` to indicate whether the operation succeeded.
+
+Change the code of sending system message to the following code:
+
+```javascript
+let ackId = 0;
+system.addEventListener('click', e => {
+ ws.send(JSON.stringify({
+ type: 'sendToGroup',
+ group: 'system',
+ ackId: ++ackId,
+ dataType: 'text',
+ data: message.value
+ }));
+ message.value = '';
+});
+```
+
+Also change the message-processing code to handle the ack message:
+
+```javascript
+ws.onmessage = event => {
+ ...
+ switch (message.type) {
+ case 'ack':
+ if (!message.success && message.error.name === 'Forbidden') m.innerText = 'No permission to send system message';
+ break;
+ }
+};
+```
+
+Now rerun the server and log in as a different user; you'll see an error message when trying to send a system message.
+
+The complete code sample of this tutorial can be found [here][code].
+
+## Next steps
+
+This tutorial provides you with a basic idea of how to connect to the Web PubSub service and how to publish messages to the connected clients using a subprotocol.
+
+Check other tutorials to further dive into how to use the service.
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://aka.ms/awps/samples)
+
+[code]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/githubchat/
+[chat-js]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp
azure-web-pubsub Tutorial Pub Sub Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-pub-sub-messages.md
This tutorial provides you a basic idea of how to connect to the Web PubSub serv
Check other tutorials to further dive into how to use the service. > [!div class="nextstepaction"]
-> [Tutorial: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md)
+> [Tutorial: Create a chatroom with Azure Web PubSub](./tutorial-build-chat.md)
> [!div class="nextstepaction"] > [Explore more Azure Web PubSub samples](https://aka.ms/awps/samples)
backup Azure Backup Move Vaults Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/azure-backup-move-vaults-across-regions.md
+
+ Title: Move Azure Recovery Services vault to another region
+description: In this article, you'll learn how to ensure continued backups after moving the resources across regions.
+ Last updated : 08/27/2021++
+# Back up resources in Recovery Services vault after moving across regions
+
+Azure Resource Mover supports the movement of multiple resources across regions. While moving your resources from one region to another, you can ensure that your resources stay protected. As Azure Backup supports protection of several workloads, you may need to take some steps to continue having the same level of protection in the new region.
+
+To understand the detailed steps to achieve this, refer to the sections below.
+
+>[!Note]
+>Azure Backup currently doesn't support the movement of backup data from one Recovery Services vault to another. To protect your resource in the new region, the resource needs to be registered and backed up to a new/existing vault in the new region. When moving your resources from one region to another, backup data in your existing Recovery Services vaults in the older region can be retained/deleted based on your requirement. If you choose to retain data in the old vaults, you will incur backup charges accordingly.
+
+## Back up Azure Virtual Machine after moving across regions
+
+When an Azure Virtual Machine (VM) that's been protected by a Recovery Services vault is moved from one region to another, it can no longer be backed up to the older vault. The backups in the old vault will start failing with the errors **BCMV2VMNotFound** or [**ResourceNotFound**](/azure/backup/backup-azure-vms-troubleshoot#320001-resourcenotfoundcould-not-perform-the-operation-as-vm-no-longer-exists--400094-bcmv2vmnotfoundthe-virtual-machine-doesnt-exist--an-azure-virtual-machine-wasnt-found).
+
+To protect your VM in the new region, you should follow these steps:
+
+1. Before moving the VM, [select the VM on the **Backup Items** tab](/azure/backup/backup-azure-delete-vault#delete-protected-items-in-the-cloud) of the existing vault's dashboard and select **Stop protection** followed by retain/delete data as per your requirement. When the backup data for a VM is stopped with retain data, the recovery points remain forever and don't adhere to any policy. This ensures you always have your backup data ready for restore.
+
+ >[!Note]
+ >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using the [Delete data option](/azure/backup/backup-azure-manage-vms#delete-backup-data).
+
+1. Move your VM to the new region using [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines).
+
+1. Start protecting your VM in a new or existing Recovery Services vault in the new region.
+ When you need to restore from your older backups, you can still do it from your old Recovery Services vault if you had chosen to retain the backup data.
+
+The above steps should help ensure that your resources are being backed up in the new region as well.
+
+## Back up Azure File Share after moving across regions
+
+To move your Storage Accounts along with the file shares in them from one region to another, see [Move an Azure Storage account to another region](/azure/storage/common/storage-account-move).
+
+>[!Note]
+>When an Azure File Share is copied across regions, its associated snapshots don't move along with it. To move the snapshot data to the new region, you need to move the individual files and directories of the snapshots to the Storage Account in the new region using [AzCopy](/azure/storage/common/storage-use-azcopy-files#copy-all-file-shares-directories-and-files-to-another-storage-account).
+
+Azure Backup offers [a snapshot management solution](/azure/backup/backup-afs#discover-file-shares-and-configure-backup) for your Azure Files today. This means you don't move the file share data into the Recovery Services vaults. Also, as the snapshots don't move with your Storage Account, you'll effectively have all your backups (snapshots) in the existing region only, protected by the existing vault. However, you can ensure that the new file shares you create in the new region are protected by Azure Backup by following these steps:
+
+1. Start protecting the Azure File Share copied into the new Storage Account in a new or existing Recovery Services vault in the new region.
+
+1. Once the Azure File Share is copied to the new region, you can choose to stop protection and retain/delete the snapshots (and the corresponding recovery points) of the original Azure File Share as per your requirement. This can be done by selecting your file share on the [Backup Items tab](/azure/backup/backup-azure-delete-vault#delete-protected-items-in-the-cloud) of the original vault's dashboard. When the backup data for Azure File Share is stopped with retain data, the recovery points remain forever and don't adhere to any policy.
+
+ This ensures that you will always have your snapshots ready for restore from the older vault.
+
+## Back up SQL Server in Azure VM/SAP HANA in Azure VM
+
+When you move a VM running SQL or SAP HANA servers to another region, the SQL and SAP HANA databases in those VMs can no longer be backed up in the vault of the earlier region. To protect the SQL and SAP HANA servers running in Azure VM in the new region, you should follow these steps:
+
+1. Before moving a VM running SQL Server/SAP HANA to a new region, select it in the [Backup Items tab](/azure/backup/backup-azure-delete-vault#delete-protected-items-in-the-cloud) of the existing vault's dashboard and select _the databases_ for which backup needs to be stopped. Select **Stop protection** followed by retain/delete data as per your requirement. When the backup data is stopped with retain data, the recovery points remain forever and don't adhere to any policy. This ensures that you always have your backup data ready for restore.
+
+ >[!Note]
+ >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using [Delete data option](/azure/backup/backup-azure-manage-vms#delete-backup-data).
+
+1. Move the VM running SQL Server/SAP HANA to the new region using [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines).
+
+1. Start protecting the VM in a new/existing Recovery Services vault in the new region. When you need to restore from your older backups, you can still do it from your old Recovery Services vault.
+
+The above steps should help ensure that your resources are being backed up in the new region as well.
backup Backup Azure Move Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-move-recovery-services-vault.md
Title: How to move Azure Backup Recovery Services vaults description: Instructions on how to move a Recovery Services vault across Azure subscriptions and resource groups. Previously updated : 04/08/2019 Last updated : 08/27/2021
You can move a Recovery Services vault and its associated resources to a differe
> [!NOTE] > Cross subscription backup (RS vault and protected VMs are in different subscriptions) isn't a supported scenario. Also, storage redundancy option from local redundant storage (LRS) to global redundant storage (GRS) and vice versa can't be modified during the vault move operation.
->
->
+
+## Use Azure portal to back up resources in Recovery Services vault after moving across regions
+
+Azure Resource Mover supports the movement of multiple resources across regions. While moving your resources from one region to another, you can ensure that your resources stay protected. As Azure Backup supports protection of several workloads, you may need to take some steps to continue having the same level of protection in the new region.
+
+To understand the detailed steps to achieve this, refer to the sections below.
+
+>[!Note]
+>Azure Backup currently doesn't support the movement of backup data from one Recovery Services vault to another. To protect your resource in the new region, the resource needs to be registered and backed up to a new/existing vault in the new region. When moving your resources from one region to another, backup data in your existing Recovery Services vaults in the older region can be retained/deleted based on your requirement. If you choose to retain data in the old vaults, you will incur backup charges accordingly.
+
+### Back up Azure Virtual Machine after moving across regions
+
+When an Azure Virtual Machine (VM) that's been protected by a Recovery Services vault is moved from one region to another, it can no longer be backed up to the older vault. The backups in the old vault will start failing with the errors **BCMV2VMNotFound** or [**ResourceNotFound**](/azure/backup/backup-azure-vms-troubleshoot#320001-resourcenotfoundcould-not-perform-the-operation-as-vm-no-longer-exists--400094-bcmv2vmnotfoundthe-virtual-machine-doesnt-exist--an-azure-virtual-machine-wasnt-found).
+
+To protect your VM in the new region, you should follow these steps:
+
+1. Before moving the VM, [select the VM on the **Backup Items** tab](/azure/backup/backup-azure-delete-vault#delete-protected-items-in-the-cloud) of the existing vault's dashboard and select **Stop protection** followed by retain/delete data as per your requirement. When the backup data for a VM is stopped with retain data, the recovery points remain forever and don't adhere to any policy. This ensures you always have your backup data ready for restore.
+
+ >[!Note]
+ >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using the [Delete data option](/azure/backup/backup-azure-manage-vms#delete-backup-data).
+
+1. Move your VM to the new region using [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines).
+
+1. Start protecting your VM in a new or existing Recovery Services vault in the new region.
+ When you need to restore from your older backups, you can still do it from your old Recovery Services vault if you had chosen to retain the backup data.
+
+The above steps should help ensure that your resources are being backed up in the new region as well.
+
+### Back up Azure File Share after moving across regions
+
+To move your Storage Accounts along with the file shares in them from one region to another, see [Move an Azure Storage account to another region](/azure/storage/common/storage-account-move).
+
+>[!Note]
+>When an Azure File Share is copied across regions, its associated snapshots don't move along with it. To move the snapshot data to the new region, you need to move the individual files and directories of the snapshots to the Storage Account in the new region using [AzCopy](/azure/storage/common/storage-use-azcopy-files#copy-all-file-shares-directories-and-files-to-another-storage-account).
+
+Azure Backup offers [a snapshot management solution](/azure/backup/backup-afs#discover-file-shares-and-configure-backup) for your Azure Files today. This means you don't move the file share data into the Recovery Services vaults. Also, as the snapshots don't move with your Storage Account, you'll effectively have all your backups (snapshots) in the existing region only, protected by the existing vault. However, you can ensure that the new file shares you create in the new region are protected by Azure Backup by following these steps:
+
+1. Start protecting the Azure File Share copied into the new Storage Account in a new or existing Recovery Services vault in the new region.
+
+1. Once the Azure File Share is copied to the new region, you can choose to stop protection and retain/delete the snapshots (and the corresponding recovery points) of the original Azure File Share as per your requirement. This can be done by selecting your file share on the [Backup Items tab](/azure/backup/backup-azure-delete-vault#delete-protected-items-in-the-cloud) of the original vault's dashboard. When the backup data for Azure File Share is stopped with retain data, the recovery points remain forever and don't adhere to any policy.
+
+ This ensures that you will always have your snapshots ready for restore from the older vault.
+
+### Back up SQL Server in Azure VM/SAP HANA in Azure VM
+
+When you move a VM running SQL or SAP HANA servers to another region, the SQL and SAP HANA databases in those VMs can no longer be backed up in the vault of the earlier region. To protect the SQL and SAP HANA servers running in Azure VM in the new region, you should follow these steps:
+
+1. Before moving a VM running SQL Server/SAP HANA to a new region, select it in the [Backup Items tab](/azure/backup/backup-azure-delete-vault#delete-protected-items-in-the-cloud) of the existing vault's dashboard and select _the databases_ for which backup needs to be stopped. Select **Stop protection** followed by retain/delete data as per your requirement. When the backup data is stopped with retain data, the recovery points remain forever and don't adhere to any policy. This ensures that you always have your backup data ready for restore.
+
+ >[!Note]
+ >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using [Delete data option](/azure/backup/backup-azure-manage-vms#delete-backup-data).
+
+1. Move the VM running SQL Server/SAP HANA to the new region using [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines).
+
+1. Start protecting the VM in a new/existing Recovery Services vault in the new region. When you need to restore from your older backups, you can still do it from your old Recovery Services vault.
+
+The above steps should help ensure that your resources are being backed up in the new region as well.
## Use PowerShell to move Recovery Services vault
backup Backup Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sql-database.md
Title: Back up SQL Server databases to Azure description: This article explains how to back up SQL Server to Azure. The article also explains SQL Server recovery. Previously updated : 06/18/2019 Last updated : 08/20/2021 # About SQL Server Backup in Azure VMs
Last updated 06/18/2019
3. Point-in-time recovery up to a second 4. Individual database level backup and restore
+>[!Note]
+>Snapshot-based backup for SQL databases in Azure VMs is now in preview. This offering combines the benefits of snapshots (better RTO and low impact on the server) with frequent log backups (low RPO). For any queries or access requests, write to us at [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com).
+ To view the backup and restore scenarios that we support today, refer to the [support matrix](sql-support-matrix.md#scenario-support). ## Backup process
Add **NT AUTHORITY\SYSTEM** and **NT Service\AzureWLBackupPluginSvc** logins to
1. Go the SQL Server Instance in the Object explorer. 2. Navigate to Security -> Logins
-3. Right-click on the logins and select *New Login…*
+3. Right-click the logins and select *New Login…*
![New Login using SSMS](media/backup-azure-sql-database/sql-2k8-new-login-ssms.png)
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 05/28/2021 Last updated : 08/20/2021 # Back up multiple SQL Server VMs from the Recovery Services vault
When you back up a SQL Server database on an Azure VM, the backup extension on t
### Database naming guidelines for Azure Backup
-Avoid using the following elements in database names:
+- Avoid using the following elements in database names:
-* Trailing and leading spaces
-* Trailing exclamation marks (!)
-* Closing square brackets (])
-* Semicolon ';'
-* Forward slash '/'
+ - Trailing and leading spaces
+ - Trailing exclamation marks (!)
+ - Closing square brackets (])
+ - Semicolon (;)
+ - Forward slash (/)
-Aliasing is available for unsupported characters, but we recommend avoiding them. For more information, see [Understanding the Table Service Data Model](/rest/api/storageservices/understanding-the-table-service-data-model).
+- Aliasing is available for unsupported characters, but we recommend avoiding them. For more information, see [Understanding the Table Service Data Model](/rest/api/storageservices/understanding-the-table-service-data-model).
+
+- Multiple databases on the same SQL instance with casing difference aren't supported.
+
+- Changing the casing of a SQL database isn't supported after configuring protection.
>[!NOTE]
->The **Configure Protection** operation for databases with special characters like "+" or "&" in their name isn't supported. You can either change the database name or enable **Auto Protection**, which can successfully protect these databases.
+>The **Configure Protection** operation for databases with special characters, such as '+' or '&', in their name isn't supported. You can change the database name or enable **Auto Protection**, which can successfully protect these databases.
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
backup Backup Sql Server On Availability Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-on-availability-groups.md
+
+ Title: Back up SQL Server always on availability groups
+description: In this article, learn how to back up SQL Server on availability groups.
+ Last updated : 08/20/2021+
+# Back up SQL Server always on availability groups
+
+Azure Backup offers end-to-end support for backing up SQL Server always on availability groups (AG) if all nodes are in the same region and subscription as the Recovery Services vault. However, if the AG nodes are spread across regions, subscriptions, or on-premises and Azure, there are a few considerations to keep in mind.
+
+>[!Note]
+>Backup of Basic Availability Group databases is not supported by Azure Backup.
+
+Azure Backup supports full and differential backups only from the primary replica, so these backup jobs always run on the primary node irrespective of the AG backup preference. For copy-only full and transaction log backups, the AG backup preference is considered when deciding the node where the backup will run.
+
+| AG Backup preference | Full and Diff backups run on | Copy-Only and Log backups are taken from |
+| -- | - | - |
+| Primary | Primary replica | Primary replica |
+| Secondary only | Primary replica | Any one of the secondary replicas |
+| Prefer Secondary | Primary replica | Secondary replicas are preferred, but backups can run on primary replica also. |
+| None/Any | Primary replica | Any replica |
+
+The workload backup extension gets installed on the node when it is registered with the Azure Backup service. When an AG database is configured for backup, the backup schedules are pushed to all the registered nodes of the AG. The schedules fire on all the AG nodes, and the workload backup extensions on these nodes synchronize between themselves to decide which node will perform the backup. The node selection depends on the backup type and the backup preference, as explained in the table above.
+
+The selected node proceeds with the backup job, whereas the jobs triggered on the other nodes bail out, that is, they skip the job.
+
+>[!Note]
+>Azure Backup doesn't consider backup priorities or replicas while deciding among the secondary replicas.
+
+## Register AG nodes to the Recovery Services vault
+
+A Recovery Services vault supports backup of databases only from VMs in the same region and subscription as that of the vault.
+
+- You must register the primary node to the vault (otherwise, full backups can't happen).
+- If the backup preference is _secondary only_, then you need to register at least one secondary node to the vault (otherwise, log/copy-only full backups can't happen).
+
+Configuring backups for AG databases will fail with the error code _FabricSvcBackupPreferenceCheckFailedUserError_ if the above conditions aren't met.
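+
+For example, the node registration and protection steps can be scripted with the Azure CLI. The following is a minimal sketch, assuming a vault named *Vault1* in resource group *rg1* and the VM names from the sample deployment; the policy and protectable-item names are placeholders that you can discover with the list command:
+
+```azurecli
+# Register the primary node's VM with the Recovery Services vault.
+az backup container register \
+    --resource-group rg1 \
+    --vault-name Vault1 \
+    --backup-management-type AzureWorkload \
+    --workload-type MSSQL \
+    --resource-id $(az vm show --resource-group rg1 --name VM1 --query id --output tsv)
+
+# Discover the SQL items (including the AG databases) that can be protected.
+az backup protectable-item list \
+    --resource-group rg1 \
+    --vault-name Vault1 \
+    --workload-type MSSQL \
+    --output table
+
+# Enable protection for an AG database (placeholder item and policy names).
+az backup protection enable-for-azurewl \
+    --resource-group rg1 \
+    --vault-name Vault1 \
+    --policy-name <policy-name> \
+    --protectable-item-type SQLDataBase \
+    --protectable-item-name <protectable-item-name> \
+    --server-name VM1 \
+    --workload-type MSSQL
+```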
+
+Let's consider the following AG deployment as a reference.
++
+Taking the above sample AG deployment as a reference, here are the various considerations:
+
+- As the primary node is in Region 1 and Subscription 1, the Recovery Services vault (Vault 1) must be in Region 1 and Subscription 1 to protect this AG.
+- VM3 can't be registered to Vault 1 as it's in a different subscription.
+- VM4 can't be registered to Vault 1 as it's in a different region.
+- If the backup preference is _secondary only_, VM1 (Primary) and VM2 (Secondary) must be registered to Vault 1 (because full backups require the primary node and logs require a secondary node). For other backup preferences, VM1 (Primary) must be registered to Vault 1; VM2 is optional (because all backups can run on the primary node).
+- While VM3 could be registered to Vault 2 in Subscription 2, and the AG databases would then show up for protection in Vault 2, configuring backups would fail because the primary node isn't registered in Vault 2.
+- Similarly, while VM4 could be registered to Vault 4 in Region 2, configuring backups would fail because the primary node isn't registered in Vault 4.
+
+## Handle failover
+
+After the AG has failed over to one of the secondary nodes:
+
+- The full and differential backups will continue from the new primary node if it's registered to the vault.
+- The log and copy-only full backups will continue from primary/secondary node based on the backup preference.
+
+>[!Note]
+>Log chain breaks don't happen on failover if the failover doesn't coincide with a backup.
+
+Taking the above sample AG deployment as a reference, here are the various failover possibilities:
+
+- Failover to VM2
+ - Full and differential backups will happen from VM2.
+ - Log and copy-only full backups will happen from VM1 or VM2 based on backup preference.
+- Failover to VM3 (another subscription)
+ - As backups aren't configured in Vault 2, no backups would happen.
+ - If the backup preference isn't secondary-only, backups can be configured now in Vault 2, because the primary node is registered in this vault. But this can lead to conflicts/backup failures. More about this in [Configure backups for a multi-region AG](#configure-backups-for-a-multi-region-ag).
+- Failover to VM4 (another region)
+ - As backups aren't configured in Vault 4, no backups would happen.
+ - If the backup preference is not secondary-only, backups can be configured now in Vault 4, because the primary node is registered in this vault. But this can lead to conflicts/backup failures. More about this in [Configure backups for a multi-region AG](#configure-backups-for-a-multi-region-ag).
+
+## Configure backups for a multi-region AG
+
+A Recovery Services vault doesn't support cross-subscription or cross-region backups. This section summarizes how to enable backups for AGs that span subscriptions or Azure regions, and the associated considerations.
+
+- Evaluate if you really need to enable backups from all the nodes. If one region/subscription has most of the AG nodes and failover to the other nodes happens rarely, setting up backup in that first region may be enough. If failovers to the other region/subscription happen frequently and for prolonged durations, you may want to set up backups proactively in the other region as well.
+
+- Each vault where the backup gets enabled will have its own set of recovery point chains. Restores from these recovery points can be done to VMs registered in that vault only.
+
+- Full/differential backups will happen successfully only in the vault that has the primary node. These backups in other vaults will keep failing.
+
+- Log backups will keep working in the previous vault until a log backup runs in the new vault (that is, in the vault where the new primary node is present) and _breaks_ the log chain for the old vault.
+ >[!Note]
+ >There's a hard limit of 15 days beyond which log backups will start failing.
+
+- Copy-only full backups will work in all the vaults.
+
+- Protection in each vault is treated as a distinct data source and is billed separately.
+
+To avoid log backup conflicts between the two vaults, we recommend that you set the backup preference to Primary. Then, whichever vault has the primary node will also take the log backups.
+
+Taking the above sample AG deployment, here are the steps to enable backup from all the nodes. The assumption is that the backup preference is satisfied in all the steps.
+
+### Step 1: Enable backups in Region 1, Subscription 1 (Vault 1)
+
+As the primary node is in this region and subscription, the usual steps to enable backups will work.
+
+### Step 2: Enable backups in Region 1, Subscription 2 (Vault 2)
+
+1. Fail over the AG to VM3 so that the primary node is present in Vault 2.
+1. Configure backups for the AG databases in Vault 2.
+1. At this point:
+ 1. The full/differential backups will fail in Vault 1 as none of the registered nodes can take this backup.
+ 1. The log backups will succeed in Vault 1 until a log backup runs in Vault 2 and _breaks_ the log chain for Vault 1.
+1. Fail back the AG to VM1.
+
+### Step 3: Enable backups in Region 2, Subscription 1 (Vault 4)
+
+Same as Step 2.
+
+## Back up an AG that spans Azure and on-premises
+
+Azure Backup for SQL Server can't run on-premises. If the primary node is in Azure and the backup preference is satisfied by the nodes in Azure, you can follow the above guidance for multi-region AGs to enable backups for the replicas in Azure.
+If a failover to an on-premises node happens, the full and differential backups in Azure will start failing. Log backups may continue until the log chain breaks or the 15-day limit passes.
+
+## Throttling for backup jobs in an AG database
+
+Currently, the backup throttling limits apply at an individual machine level. The default limit is 20: if more than 20 backups are triggered concurrently, the first 20 will run and the others will be queued. When the running jobs complete, the queued ones start running.
+
+You can change this value to a smaller value if the concurrent backups are causing memory/IO/CPU strain on the node.
+**Since the throttling is at the node level, having unbalanced AG nodes can lead to backup synchronization problems**. To understand this, consider a two-node AG.
+
+For example, say the first node has 50 standalone databases protected and both nodes have 5 AG databases protected. Effectively, Node 1 has 55 database backup jobs scheduled, whereas Node 2 has only 5. Also, all these backups are configured to run at the same time, every hour. At some point, all 55 backups will trigger on Node 1 and 35 of them will be queued. Some of these would be the AG database backups. But on Node 2, the AG database backups would go ahead without any queuing.
+
+As the AG database jobs are queued on one node and running on another, the backup synchronization described earlier won't work properly. Node 2 might assume that Node 1 is down because jobs from Node 1 aren't showing up for synchronization. This can lead to log chain breaks or extra backups, as both nodes can take backups independently.
+
+A similar problem can happen if the number of protected AG databases is more than the throttling limit. In that case, the backup for, say, DB1 can be queued on Node 1 while it runs on Node 2.
+
+To avoid these synchronization issues, we recommend the following backup preferences (see the sketch after this list):
+
+- For a two-node AG, set the backup preference to Primary or Secondary only. Then only one node can take the backups; the other will always bail out.
+- For an AG with more than two nodes, set the backup preference to Primary. Then only the primary node can take the backups; the others will bail out.
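+
+The AG backup preference itself is a SQL Server setting, so it can be changed with T-SQL against the primary replica. Here's a minimal sketch run through `sqlcmd`; the AG name, server, and credentials are placeholders:
+
+```bash
+# Set the backup preference to Primary so that only the primary replica
+# takes backups and the secondaries always bail out (placeholder names).
+sqlcmd -S <primary-replica> -U <user> -P <password> \
+    -Q "ALTER AVAILABILITY GROUP [MyAG] SET (AUTOMATED_BACKUP_PREFERENCE = PRIMARY);"
+```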
+
+## Billing for AG backups
+
+As with a standalone SQL instance, one backed-up AG instance is considered one protected instance. The total front-end size of all protected databases in an instance is charged. Consider the following deployment:
++
+The protected instances are calculated as follows:
+
+| Protected Instance/ Billing instance | Databases considered for calculating frontend size |
+| | -- |
+| AG1 | DB1, DB2 |
+| AG2 | DB4 |
+| VM2 | DB3 |
+| VM3 | DB6 |
+| VM4 | DB5 |
+
+## Moving a protected database in or out of an AG
+
+Azure Backup considers **SQL instance or AG name\Database name** as the database's unique name. When the standalone DB was protected, its unique name was _StandAloneInstanceName\DBName_. When it moves under an AG, the unique name changes to _AGName\DBName_. The backups for the standalone database will start failing with the error code _UserErrorBackupFailedStandaloneDatabaseMovedInToAG_.
+
+The database must be configured for protection from under the AG. This will be treated as a new data source with a separate recovery point chain. The older protection of the standalone database can be stopped with the retain data option to avoid future backups from triggering and failing on it. Similarly, when a protected AG database moves out of the AG and becomes a standalone database, its backups start failing with the error code _UserErrorBackupFailedDatabaseMovedOutOfAG_.
+
+The database must be configured for protection from under the standalone instance. This will be treated as a new data source with a separate recovery point chain. The older protection of the AG database can be stopped with the retain data option to avoid future backups from triggering and failing on it.
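+
+Stopping the older protection while retaining data can also be scripted. A minimal Azure CLI sketch with placeholder names; the `--delete-backup-data false` flag keeps the existing recovery points:
+
+```azurecli
+# Stop protection for the old data source but retain its backup data.
+az backup protection disable \
+    --resource-group rg1 \
+    --vault-name Vault1 \
+    --backup-management-type AzureWorkload \
+    --workload-type MSSQL \
+    --container-name <container-name> \
+    --item-name <item-name> \
+    --delete-backup-data false \
+    --yes
+```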
+
+## Addition or removal of a node in an AG
+
+When a new node gets added to an AG that is configured for backups, the workload backup extensions running on the already registered AG nodes detect the AG topology change and inform the Azure Backup service during the next scheduled database discovery job. When this new node gets registered for backups to the same Recovery Services vault as the other existing nodes, the Azure Backup service triggers a workflow that configures this new node with the necessary metadata for performing AG backups.
+
+After this, the new node syncs the AG backup schedule information from the Azure Backup service and starts participating in the synchronized backup process. If the new node isn't able to sync the backup schedules and participate in backups, triggering a re-registration on the node forces reconfiguration of the node for AG backups as well (see the sketch below). Similarly, on node removal, the workload extensions detect the AG topology change and inform the Azure Backup service. The service starts a node _un-configuration_ workflow on the removed node to clear the backup schedules for AG databases and delete the AG-related metadata.
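+
+If you need to force that reconfiguration, you can trigger the re-registration from the Azure CLI. A minimal sketch, assuming your CLI version includes the `az backup container re-register` command; all names are placeholders:
+
+```azurecli
+# Re-register the node's workload container to force reconfiguration for AG backups.
+az backup container re-register \
+    --resource-group rg1 \
+    --vault-name Vault1 \
+    --backup-management-type AzureWorkload \
+    --workload-type MSSQL \
+    --container-name <container-name> \
+    --yes
+```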
+
+## Un-register an AG node from Azure Backup
+
+If a node is part of an AG that has one or more databases configured for backup, Azure Backup doesn't allow unregistration of that node. This is to prevent future backup failures in case the backup preference can't be met without this node. To unregister the node, first remove it from the AG. When the node _un-configuration_ workflow completes, cleaning up that node, you can unregister it.
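+
+Once the node is out of the AG and cleaned up, the unregistration can be done from the Azure CLI. A minimal sketch with placeholder names:
+
+```azurecli
+# Unregister the cleaned-up node's workload container from the vault.
+az backup container unregister \
+    --resource-group rg1 \
+    --vault-name Vault1 \
+    --backup-management-type AzureWorkload \
+    --container-name <container-name> \
+    --yes
+```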
+
+## Restore a database from Azure Backup to an AG
+
+SQL availability groups don't support directly restoring a database into the AG. The database needs to be restored to a standalone SQL instance and then joined to the AG.
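+
+The join step is a SQL Server operation. Here's a minimal T-SQL sketch, run through `sqlcmd` with placeholder names, assuming the restored database has also been restored on the secondaries with `NORECOVERY`:
+
+```bash
+# On the primary replica: add the restored database to the AG.
+sqlcmd -S <primary-replica> -U <user> -P <password> \
+    -Q "ALTER AVAILABILITY GROUP [MyAG] ADD DATABASE [MyDb];"
+
+# On each secondary replica (after restoring the database there WITH NORECOVERY):
+sqlcmd -S <secondary-replica> -U <user> -P <password> \
+    -Q "ALTER DATABASE [MyDb] SET HADR AVAILABILITY GROUP = [MyAG];"
+```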
+
+## Next steps
+
+Learn how to:
+
+* [Restore backed-up SQL Server databases](restore-sql-database-azure-vm.md)
+* [Manage backed-up SQL Server databases](manage-monitor-sql-database-backup.md)
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 06/07/2021 Last updated : 08/20/2021
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
**Supported operating systems** | Windows Server 2019, Windows Server 2016, Windows Server 2012, Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported. **Supported SQL Server versions** | SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. **Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM
+**Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Standalone instances and Always On [availability groups](backup-sql-server-on-availability-groups.md) are supported.
## Feature considerations and limitations
_*The database size limit depends on the data transfer rate that we support and
* TDE - enabled database backup is supported. To restore a TDE-encrypted database to another SQL Server, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). Backup compression for TDE-enabled databases for SQL Server 2016 and newer versions is available, but at lower transfer size as explained [here](https://techcommunity.microsoft.com/t5/sql-server/backup-compression-for-tde-enabled-databases-important-fixes-in/ba-p/385593). * Backup and restore operations for mirror databases and database snapshots aren't supported. * SQL Server **Failover Cluster Instance (FCI)** isn't supported.
-* Using more than one backup solutions to back up your standalone SQL Server instance or SQL Always on availability group may lead to backup failure. Refrain from doing so. Backing up two nodes of an availability group individually with same or different solutions, may also lead to backup failure.
-* When availability groups are configured, backups are taken from the different nodes based on a few factors. The backup behavior for an availability group is summarized below.
-
-### Back up behavior with Always on availability groups
-
-We recommend that the backup is configured on only one node of an availability group (AG). Always configure backup in the same region as the primary node. In other words, you always need the primary node to be present in the region where you're configuring the backup. If all the nodes of the AG are in the same region where the backup is configured, there isn't any concern.
-
-#### For cross-region AG
-
-* Regardless of the backup preference, backups will only run from the nodes that are in the same region where the backup is configured. This is because cross-region backups aren't supported. If you have only two nodes and the secondary node is in the other region, the backups will continue to run from the primary node (unless your backup preference is 'secondary only').
-* If a node fails over to a region different than the one where the backup is configured, backups will fail on the nodes in the failed-over region.
-
-Depending on the backup preference and backups types (full/differential/log/copy-only full), backups are taken from a particular node (primary/secondary).
-
-#### Backup preference: Primary
-
-**Backup Type** | **Node**
- |
-Full | Primary
-Differential | Primary
-Log | Primary
-Copy-Only Full | Primary
-
-#### Backup preference: Secondary Only
-
-**Backup Type** | **Node**
- |
-Full | Primary
-Differential | Primary
-Log | Secondary
-Copy-Only Full | Secondary
-
-#### Backup preference: Secondary
-
-**Backup Type** | **Node**
- |
-Full | Primary
-Differential | Primary
-Log | Secondary
-Copy-Only Full | Secondary
-
-#### No Backup preference
-
-**Backup Type** | **Node**
- |
-Full | Primary
-Differential | Primary
-Log | Secondary
-Copy-Only Full | Secondary
## Backup throughput performance
batch Credential Access Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/credential-access-key-vault.md
Title: Use certificates and securely access Azure Key Vault with Batch description: Learn how to programmatically access your credentials from Key Vault using Azure Batch. Previously updated : 10/28/2020 Last updated : 08/25/2021 # Use certificates and securely access Azure Key Vault with Batch
-In this article, you'll learn how to set up Batch nodes to securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md). There's no point in putting your admin credentials in Key Vault, then hard-coding credentials to access Key Vault from a script. The solution is to use a certificate that grants your Batch nodes access to Key Vault.
+In this article, you'll learn how to set up Batch nodes to securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md).
To authenticate to Azure Key Vault from a Batch node, you need:
To authenticate to Azure Key Vault from a Batch node, you need:
- A Batch account - A Batch pool with at least one node
+> [!IMPORTANT]
+> Batch now offers an improved option for accessing credentials stored in Azure Key Vault. By creating your pool with a user-assigned managed identity that can access the certificate in Azure Key Vault, you don't need to send the certificate content to the Batch Service, which enhances security. We recommend using automatic certificate rotation instead of the method described in this topic. For more information, see [Enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
+ ## Obtain a certificate If you don't already have a certificate, the easiest way to get one is to generate a self-signed certificate using the `makecert` command-line tool.
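+
+As an alternative sketch using OpenSSL instead of `makecert` (the subject name, file names, and password are placeholders), you can generate a self-signed certificate and bundle it into a .pfx file:
+
+```bash
+# Generate a self-signed certificate and private key (placeholder subject).
+openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
+    -subj "/CN=batchcert" -keyout key.pem -out cert.pem
+
+# Bundle them into a password-protected .pfx file for upload.
+openssl pkcs12 -export -in cert.pem -inkey key.pem \
+    -out batchcert.pfx -passout pass:<password>
+```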
These are the credentials to use in your script.
- Learn more about [Azure Key Vault](../key-vault/general/overview.md). - Review the [Azure Security Baseline for Batch](security-baseline.md).-- Learn about Batch features such as [configuring access to compute nodes](pool-endpoint-configuration.md), [using Linux compute nodes](batch-linux-nodes.md), and [using private endpoints](private-connectivity.md).
+- Learn about Batch features such as [configuring access to compute nodes](pool-endpoint-configuration.md), [using Linux compute nodes](batch-linux-nodes.md), and [using private endpoints](private-connectivity.md).
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
cloud-services-extended-support Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/cloud-services-model-and-package.md
The **ServiceDefinition.csdef** file specifies the settings that are used by Azu
```xml <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="MyServiceName" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
- <WebRole name="WebRole1" vmsize="Medium">
+ <WebRole name="WebRole1" vmsize="Standard_D1_v2">
<Sites> <Site name="Web"> <Bindings>
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
# Configure OpenSSL for Linux
-When using any Speech SDK version before 1.9.0, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version. In later versions of the Speech SDK, OpenSSL (version [1.1.1b](https://mta.openssl.org/pipermail/openssl-announce/2019-February/000147.html)) is statically linked to the core library of the Speech SDK.
+When using any Speech SDK version before 1.9.0, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version. In later versions of the Speech SDK, OpenSSL is statically linked to the core library of the Speech SDK. Speech SDK versions 1.9.0 through 1.16.0 use [OpenSSL version 1.1.1b](https://mta.openssl.org/pipermail/openssl-announce/2019-February/000147.html). Speech SDK version 1.17.0 onward uses [OpenSSL version 1.1.1k](https://mta.openssl.org/pipermail/openssl-announce/2021-March/000197.html).
To ensure connectivity, verify that OpenSSL certificates have been installed in your system. Run a command: ```bash
config.setProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
::: zone pivot="programming-language-python" ```Python
-speech_config.set_property_by_name("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")?
+speech_config.set_property_by_name("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
``` ::: zone-end
config.setProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
::: zone pivot="programming-language-python" ```Python
-speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")?
+speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")
``` ::: zone-end
speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")?
## Next steps > [!div class="nextstepaction"]
-> [About the Speech SDK](speech-sdk.md)
+> [About the Speech SDK](speech-sdk.md)
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
You're now ready to deploy a test application.
Create a file named *hello-world-enclave.yaml* and paste in the following YAML manifest. You can find this sample application code in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). This deployment assumes that you've deployed the *confcom* add-on.
+> [!NOTE]
+> The following example pulls a public container image from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the image in a private Azure container registry. [Learn more about working with public images.](../container-registry/buffer-gate-public-content.md)
+ ```yaml apiVersion: batch/v1 kind: Job
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-gpu.md
When deploying GPU resources, set CPU and memory resources appropriate for the w
One way to add GPU resources is to deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md). Copy the following YAML into a new file named *gpu-deploy-aci.yaml*, then save the file. This YAML creates a container group named *gpucontainergroup* specifying a container instance with a K80 GPU. The instance runs a sample CUDA vector addition application. The resource requests are sufficient to run the workload.
+ > [!NOTE]
+ > The following example uses a public container image. To improve reliability, import and manage the image in a private Azure container registry, and update your YAML to use your privately managed base image. [Learn more about working with public images](../container-registry/buffer-gate-public-content.md).
+ ```YAML additional_properties: {} apiVersion: '2019-12-01'
container-instances Container Instances Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-log-analytics.md
The following examples demonstrate two ways to create a container group that con
To deploy with the Azure CLI, specify the `--log-analytics-workspace` and `--log-analytics-workspace-key` parameters in the [az container create][az-container-create] command. Replace the two workspace values with the values you obtained in the previous step (and update the resource group name) before running the following command.
+> [!NOTE]
+> The following example pulls a public container image from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the image in a private Azure container registry. [Learn more about working with public images.](../container-registry/buffer-gate-public-content.md)
+ ```azurecli-interactive az container create \ --resource-group myResourceGroup \
az container create \
Use this method if you prefer to deploy container groups with YAML. The following YAML defines a container group with a single container. Copy the YAML into a new file, then replace `LOG_ANALYTICS_WORKSPACE_ID` and `LOG_ANALYTICS_WORKSPACE_KEY` with the values you obtained in the previous step. Save the file as **deploy-aci.yaml**.
+> [!NOTE]
+> The following example pulls a public container image from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the image in a private Azure container registry. [Learn more about working with public images.](../container-registry/buffer-gate-public-content.md)
+ ```yaml apiVersion: 2019-12-01 location: eastus
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
cosmos-db Analytical Store Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-private-endpoints.md
To configure network isolation for this account from a Synapse workspace:
## Next steps
-* Get started with [querying analytical store with Azure Synapse Spark](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
+* Get started with [querying analytical store with Azure Synapse Spark 3](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
+* Get started with [querying analytical store with Azure Synapse Spark 2](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
* Get started with [querying analytical store with Azure Synapse serverless SQL pools](../synapse-analytics/sql/query-cosmos-db-analytical-store.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
cosmos-db Attachments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/attachments.md
# Azure Cosmos DB Attachments Azure Cosmos DB attachments are special items that contain references to an associated metadata with an external blob or media file.
cosmos-db Bulk Executor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/bulk-executor-overview.md
The bulk executor library makes sure to maximally utilize the throughput allocat
## Next Steps * Learn more by trying out the sample applications consuming the bulk executor library in [.NET](bulk-executor-dot-net.md) and [Java](bulk-executor-java.md).
-* Check out the bulk executor SDK information and release notes in [.NET](sql-api-sdk-bulk-executor-dot-net.md) and [Java](sql-api-sdk-bulk-executor-java.md).
+* Check out the bulk executor SDK information and release notes in [.NET](sql-api-sdk-bulk-executor-dot-net.md) and [Java](sql/sql-api-sdk-bulk-executor-java.md).
* The bulk executor library is integrated into the Cosmos DB Spark connector, to learn more, see [Azure Cosmos DB Spark connector](./create-sql-api-spark.md) article. * The bulk executor library is also integrated into a new version of [Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md) for Azure Data Factory to copy data.
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/migrate-data-striim.md
This article shows how to use Striim to migrate data from an **Oracle database**
1. Select **Create a resource** and search for **Striim** in the Azure marketplace. Select the first option and **Create**.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/striim-azure-marketplace.png" alt-text="Find Striim marketplace item":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-azure-marketplace.png" alt-text="Find Striim marketplace item":::
1. Next, enter the configuration properties of the Striim instance. The Striim environment is deployed in a virtual machine. From the **Basics** pane, enter the **VM user name** and **VM password** (this password is used to SSH into the VM). Select your **Subscription**, **Resource Group**, and **Location details** where you'd like to deploy Striim. Once complete, select **OK**.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/striim-configure-basic-settings.png" alt-text="Configure basic settings for Striim":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-configure-basic-settings.png" alt-text="Configure basic settings for Striim":::
1. In the **Striim Cluster settings** pane, choose the type of Striim deployment and the virtual machine size.
This article shows how to use Striim to migrate data from an **Oracle database**
1. In the **Striim access settings** pane, configure the **Public IP address** (choose the default values), the **Domain name for Striim**, and the **Admin password** that you'd like to use to log in to the Striim UI. Configure a VNET and subnet (choose the default values). After filling in the details, select **OK** to continue.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/striim-access-settings.png" alt-text="Striim access settings":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-access-settings.png" alt-text="Striim access settings":::
1. Azure will validate the deployment and make sure everything looks good; validation takes few minutes to complete. After the validation is completed, select **OK**.
In this section, you will configure the Azure Cosmos DB Cassandra API account as
1. Navigate to the Striim instance that you deployed in the Azure portal. Select the **Connect** button in the upper menu bar and from the **SSH** tab, copy the URL in **Login using VM local account** field.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses the terminal on macOS; you can follow similar instructions using PuTTY or a different SSH client on a Windows machine. When prompted, type **yes** to continue and enter the **password** you set for the virtual machine in the previous step.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
1. Now, open a new terminal tab to copy the **ojdbc8.jar** file you downloaded previously. Use the following SCP command to copy the jar file from your local machine to the tmp folder of the Striim instance running in Azure:
In this section, you will configure the Azure Cosmos DB Cassandra API account as
scp ojdbc8.jar striimdemo@striimdemo.westus.cloudapp.azure.com:/tmp ```
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/copy-jar-file.png" alt-text="Copy the Jar file from location machine to Striim":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/copy-jar-file.png" alt-text="Copy the Jar file from the local machine to Striim":::
1. Next, navigate back to the window where you used SSH to connect to the Striim instance and log in as sudo. Move the **ojdbc8.jar** file from the **/tmp** directory into the **lib** directory of your Striim instance with the following commands:
In this section, you will configure the Azure Cosmos DB Cassandra API account as
chmod +x ojdbc8.jar ```
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/move-jar-file.png" alt-text="Move the Jar file to lib folder":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/move-jar-file.png" alt-text="Move the Jar file to lib folder":::
1. From the same terminal window, restart the Striim server by executing the following commands:
In this section, you will configure the Azure Cosmos DB Cassandra API account as
1. Now, navigate back to Azure and copy the Public IP address of your Striim VM.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/copy-public-ip-address.png" alt-text="Copy Striim VM IP address":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/copy-public-ip-address.png" alt-text="Copy Striim VM IP address":::
1. To navigate to Striim's web UI, open a new tab in a browser and enter the public IP address followed by `:9080`. Sign in by using the **admin** username, along with the admin password you specified in the Azure portal.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/striim-login-ui.png" alt-text="Sign in to Striim":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/striim-login-ui.png" alt-text="Sign in to Striim":::
1. Now you'll arrive at Striim's home page. There are three different panes: **Dashboards**, **Apps**, and **SourcePreview**. The Dashboards pane allows you to move data in real time and visualize it. The Apps pane contains your streaming data pipelines, or data flows. On the right-hand side of the page is SourcePreview, where you can preview your data before moving it. 1. Select the **Apps** pane; we'll focus on this pane for now. There are a variety of sample apps that you can use to learn about Striim; however, in this article you'll create your own. Select the **Add App** button in the top right-hand corner.
- :::image type="content" source="../media/cosmosdb-sql-api-migrate-data-striim/add-striim-app.png" alt-text="Add the Striim app":::
+ :::image type="content" source="../sql/media/cosmosdb-sql-api-migrate-data-striim/add-striim-app.png" alt-text="Add the Striim app":::
1. There are a few different ways to create Striim applications. Select **Start from Scratch** for this scenario.
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/choose-api.md
Last updated 07/12/2021
# Choose an API in Azure Cosmos DB Azure Cosmos DB is a fully managed NoSQL database for modern app development. Azure Cosmos DB takes database administration off your hands with automatic management, updates, and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cli-samples.md
- Title: Azure CLI Samples for Azure Cosmos DB Core (SQL) API
-description: Azure CLI Samples for Azure Cosmos DB Core (SQL) API
---- Previously updated : 08/26/2021----
-# Azure CLI samples for Azure Cosmos DB Core (SQL) API
-
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-
-These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
-
-For Azure CLI samples for other APIs see [CLI Samples for Cassandra](cassandr)
-
-## Common Samples
-
-These samples apply to all Azure Cosmos DB APIs
-
-|Task | Description |
-|||
-| [Add or failover regions](scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
-|||
-
-## Core (SQL) API Samples
-
-|Task | Description |
-|||
-| [Create an Azure Cosmos account, database and container](scripts/cli/sql/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and container for Core (SQL) API. |
-| [Create an Azure Cosmos account, database and container with autoscale](scripts/cli/sql/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and container with autoscale for Core (SQL) API. |
-| [Throughput operations](scripts/cli/sql/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and container.|
-| [Lock resources from deletion](scripts/cli/sql/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
-|||
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
Use the instructions in [Connect to Azure Synapse Link](../synapse-analytics/syn
## <a id="query-analytical-store-spark"></a> Query analytical store using Apache Spark for Azure Synapse Analytics
-Use the instructions in the [Query Azure Cosmos DB analytical store](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md) article on how to query with Synapse Spark. That article gives some examples on how you can interact with the analytical store from Synapse gestures. Those gestures are visible when you right-click on a container. With gestures, you can quickly generate code and tweak it to your needs. They are also perfect for discovering data with a single click.
+Use the instructions in the [Query Azure Cosmos DB analytical store using Spark 3](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md) article on how to query with Synapse Spark 3. That article gives some examples on how you can interact with the analytical store from Synapse gestures. Those gestures are visible when you right-click on a container. With gestures, you can quickly generate code and tweak it to your needs. They are also perfect for discovering data with a single click.
+
+For Spark 2 integration, use the instructions in the [Query Azure Cosmos DB analytical store using Spark 2](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) article.
## <a id="query-analytical-store-sql-on-demand"></a> Query the analytical store using serverless SQL pool in Azure Synapse Analytics
cosmos-db Cosmos Db Advanced Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmos-db-advanced-queries.md
# Troubleshoot issues with advanced diagnostics queries for the SQL (Core) API > [!div class="op_single_selector"] > * [SQL (Core) API](cosmos-db-advanced-queries.md) > * [MongoDB API](mongodb/diagnostic-queries-mongodb.md) > * [Cassandra API](cassandr)
-> * [Gremlin API](queries-gremlin.md)
+> * [Gremlin API](graph/diagnostic-queries-gremlin.md)
> In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview**) tables.
cosmos-db Cosmosdb Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-jupyter-notebooks.md
To get started with built-in Jupyter Notebooks in Azure Cosmos DB, see the follo
* [Explore notebook samples gallery](https://cosmos.azure.com/gallery.html) * [Use Python notebook features and commands](use-python-notebook-features-and-commands.md) * [Use C# notebook features and commands](use-csharp-notebook-features-and-commands.md)
-* [Import notebooks from a GitHub repo](import-github-notebooks.md)
+* [Import notebooks from a GitHub repo](sql/import-github-notebooks.md)
cosmos-db Cosmosdb Monitor Logs Basic Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-monitor-logs-basic-queries.md
# Troubleshoot issues with diagnostics queries In this article, we'll cover how to write simple queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/data-residency.md
# How to meet data residency requirements in Azure Cosmos DB In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the [residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/database-security.md
Each account consists of two keys: a primary key and secondary key. The purpose
Primary/secondary keys come in two versions: read-write and read-only. The read-only keys only allow read operations on the account, but do not provide access to read permissions resources.
-Primary/secondary keys can be retrieved and regenerated using the Azure portal. For instructions, see [View, copy, and regenerate access keys](manage-with-cli.md#regenerate-account-key).
+Primary/secondary keys can be retrieved and regenerated using the Azure portal. For instructions, see [View, copy, and regenerate access keys](sql/manage-with-cli.md#regenerate-account-key).
:::image type="content" source="./media/secure-access-to-data/nosql-database-security-master-key-portal.png" alt-text="Access control (IAM) in the Azure portal - demonstrating NoSQL database security":::
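+
+The same key operations can be scripted with the Azure CLI. A minimal sketch with placeholder account and resource group names:
+
+```azurecli
+# List the current account keys, then regenerate the primary key.
+az cosmosdb keys list --name mycosmosaccount --resource-group rg1 --type keys
+az cosmosdb keys regenerate --name mycosmosaccount --resource-group rg1 --key-kind primary
+```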
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/high-availability.md
Availability Zones can be enabled via:
* [Azure PowerShell](manage-with-powershell.md#create-account)
-* [Azure CLI](manage-with-cli.md#add-or-remove-regions)
+* [Azure CLI](sql/manage-with-cli.md#add-or-remove-regions)
* [Azure Resource Manager templates](./manage-with-templates.md)
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-manage-database-account.md
This article describes how to manage various tasks on an Azure Cosmos account us
### <a id="create-database-account-via-cli"></a>Azure CLI
-Please see [Create an Azure Cosmos DB account with Azure CLI](manage-with-cli.md#create-an-azure-cosmos-db-account)
+Please see [Create an Azure Cosmos DB account with Azure CLI](sql/manage-with-cli.md#create-an-azure-cosmos-db-account)
### <a id="create-database-account-via-ps"></a>Azure PowerShell
In a multi-region write mode, you can add or remove any region, if you have at l
### <a id="add-remove-regions-via-cli"></a>Azure CLI
-Please see [Add or remove regions with Azure CLI](manage-with-cli.md#add-or-remove-regions)
+Please see [Add or remove regions with Azure CLI](sql/manage-with-cli.md#add-or-remove-regions)
### <a id="add-remove-regions-via-ps"></a>Azure PowerShell
Open the **Replicate Data Globally** tab and select **Enable** to enable multi-r
### <a id="configure-multiple-write-regions-cli"></a>Azure CLI
-Please see [Enable multiple-write regions with Azure CLI](manage-with-cli.md#enable-multiple-write-regions)
+Please see [Enable multiple-write regions with Azure CLI](sql/manage-with-cli.md#enable-multiple-write-regions)
### <a id="configure-multiple-write-regions-ps"></a>Azure PowerShell
The Automatic failover option allows Azure Cosmos DB to failover to the region w
### <a id="enable-automatic-failover-via-cli"></a>Azure CLI
-Please see [Enable automatic failover with Azure CLI](manage-with-cli.md#enable-automatic-failover)
+Please see [Enable automatic failover with Azure CLI](sql/manage-with-cli.md#enable-automatic-failover)
### <a id="enable-automatic-failover-via-ps"></a>Azure PowerShell
After a Cosmos account is configured for automatic failover, the failover priori
### <a id="set-failover-priorities-via-cli"></a>Azure CLI
-Please see [Set failover priority with Azure CLI](manage-with-cli.md#set-failover-priority)
+Please see [Set failover priority with Azure CLI](sql/manage-with-cli.md#set-failover-priority)
### <a id="set-failover-priorities-via-ps"></a>Azure PowerShell
The process for performing a manual failover involves changing the account's wri
### <a id="enable-manual-failover-via-cli"></a>Azure CLI
-Please see [Trigger manual failover with Azure CLI](manage-with-cli.md#trigger-manual-failover)
+Please see [Trigger manual failover with Azure CLI](sql/manage-with-cli.md#trigger-manual-failover)
### <a id="enable-manual-failover-via-ps"></a>Azure PowerShell
Please see [Trigger manual failover with PowerShell](manage-with-powershell.md#t
For more information and examples on how to manage the Azure Cosmos account as well as database and containers, read the following articles: * [Manage Azure Cosmos DB using Azure PowerShell](manage-with-powershell.md)
-* [Manage Azure Cosmos DB using Azure CLI](manage-with-cli.md)
+* [Manage Azure Cosmos DB using Azure CLI](sql/manage-with-cli.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
The Azure Cosmos DB data plane RBAC is built on concepts that are commonly found
> This permission model only covers database operations that let you read and write data. It does **not** cover any kind of management operations, like creating containers or changing their throughput. This means that you **cannot use any Azure Cosmos DB data plane SDK** to authenticate management operations with an AAD identity. Instead, you must use [Azure RBAC](role-based-access-control.md) through: > - [Azure Resource Manager (ARM) templates](manage-with-templates.md) > - [Azure PowerShell scripts](manage-with-powershell.md),
-> - [Azure CLI scripts](manage-with-cli.md),
+> - [Azure CLI scripts](sql/manage-with-cli.md),
> - Azure management libraries available in > - [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.CosmosDB/) > - [Java](https://search.maven.org/artifact/com.azure.resourcemanager/azure-resourcemanager-cosmos)
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/linux-emulator.md
Last updated 06/04/2021
# Run the emulator on Docker for Linux (Preview) The Azure Cosmos DB Linux Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Currently, the Linux emulator only supports SQL API. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Linux Emulator, you can switch to using an Azure Cosmos DB account in the cloud. This article describes how to install and use the emulator on macOS and Linux environments.
cosmos-db Local Emulator On Docker Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-on-docker-windows.md
Last updated 04/20/2021
# <a id="run-on-windows-docker"></a>Use the emulator on Docker for Windows You can run the Azure Cosmos DB Emulator on a Windows Docker container. See the [Docker Hub](https://hub.docker.com/r/microsoft/azure-cosmosdb-emulator/) for the docker pull command and [GitHub](https://github.com/Azure/azure-cosmos-db-emulator-docker) for the `Dockerfile` and more information. Currently, the emulator does not work on Docker for Oracle Linux. Use the following instructions to run the emulator on Docker for Windows:
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples.md
- Title: Azure PowerShell samples for Azure Cosmos DB Core (SQL) API
-description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Core (SQL) API
---- Previously updated : 08/26/2021---
-# Azure PowerShell samples for Azure Cosmos DB Core (SQL) API
-
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
-
-For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](cassandr)
-
-## Common Samples
-
-|Task | Description |
-|||
-|[Update an account](scripts/powershell/common/account-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's default consistency level. |
-|[Update an account's regions](scripts/powershell/common/update-region.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Update a Cosmos DB account's regions. |
-|[Change failover priority or trigger failover](scripts/powershell/common/failover-priority-update.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Change the regional failover priority of an Azure Cosmos account or trigger a manual failover. |
-|[Account keys or connection strings](scripts/powershell/common/keys-connection-strings.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Get primary and secondary keys, connection strings or regenerate an account key of an Azure Cosmos DB account. |
-|[Create a Cosmos Account with IP Firewall](scripts/powershell/common/firewall-create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account with IP Firewall enabled. |
-|||
-
-## Core (SQL) API Samples
-
-|Task | Description |
-|||
-|[Create an account, database and container](scripts/powershell/sql/create.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account, database and container. |
-|[Create an account, database and container with autoscale](scripts/powershell/sql/autoscale.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create an Azure Cosmos DB account, database and container with autoscale. |
-|[Create a container with a large partition key](scripts/powershell/sql/create-large-partition-key.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Create a container with a large partition key. |
-|[Create a container with no index policy](scripts/powershell/sql/create-index-none.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Create an Azure Cosmos container with index policy turned off.|
-|[List or get databases or containers](scripts/powershell/sql/list-get.md?toc=%2fpowershell%2fmodule%2ftoc.json)| List or get database or containers. |
-|[Throughput operations](scripts/powershell/sql/throughput.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Throughput operations for a database or container including get, update and migrate between autoscale and standard throughput. |
-|[Lock resources from deletion](scripts/powershell/sql/lock.md?toc=%2fpowershell%2fmodule%2ftoc.json)| Prevent resources from being deleted with resource locks. |
-|||
-
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Rate Limiting Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/rate-limiting-requests.md
# Optimize your Azure Cosmos DB application using rate limiting This article provides developers with a methodology to rate limit requests to Azure Cosmos DB. Implementing this pattern can reduce errors and improve overall performance for workloads that exceed the provisioned throughput of the target database or container.
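One common ingredient of such a pattern is retrying throttled requests with backoff. Below is a minimal sketch, assuming the v3 .NET SDK, an existing `Container` instance, and an illustrative retry budget; the helper name and `maxRetries` value are assumptions, not part of the article.

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical helper: retry a create on 429 (TooManyRequests), waiting at
// least as long as the service's RetryAfter hint before trying again.
static async Task<ItemResponse<T>> CreateWithBackoffAsync<T>(
    Container container, T item, PartitionKey partitionKey, int maxRetries = 5)
{
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return await container.CreateItemAsync(item, partitionKey);
        }
        catch (CosmosException ex) when
            (ex.StatusCode == (HttpStatusCode)429 && attempt < maxRetries)
        {
            // Honor the server-suggested delay; fall back to a small fixed wait.
            await Task.Delay(ex.RetryAfter ?? TimeSpan.FromMilliseconds(500));
        }
    }
}
```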
cosmos-db Relational Nosql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/relational-nosql.md
If you are managing data whose structures are constantly changing at a high rate
The [microservices](https://en.wikipedia.org/wiki/Microservices) pattern has grown significantly in recent years. This pattern has its roots in [Service-Oriented Architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture). The de-facto standard for data transmission in these modern microservices architectures is [JSON](https://en.wikipedia.org/wiki/JSON), which also happens to be the storage medium for the vast majority of document-oriented NoSQL Databases. This makes NoSQL document stores a much more seamless fit for both the persistence and synchronization (using [event sourcing patterns](https://en.wikipedia.org/wiki/Event-driven_architecture)) across complex Microservice implementations. More traditional relational databases can be much more complex to maintain in these architectures. This is due to the greater amount of transformation required for both state and synchronization across APIs. Azure Cosmos DB in particular has a number of features that make it an even more seamless fit for JSON-based Microservices Architectures than many NoSQL databases: * a choice of pure JSON data types
-* a JavaScript engine and [query API](./javascript-query-api.md) built into the database.
+* a JavaScript engine and [query API](sql/javascript-query-api.md) built into the database.
* a state-of-the-art [change feed](./change-feed.md) which clients can subscribe to in order to get notified of modifications to a container. ## Some challenges with NoSQL databases
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Cosmos DB description: Sample Azure Resource Graph queries for Azure Cosmos DB showing use of resource types and tables to access Azure Cosmos DB related resources and properties. Previously updated : 08/09/2021 Last updated : 08/27/2021
# Azure Resource Graph sample queries for Azure Cosmos DB This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Azure Cosmos DB. For a complete list of Azure Resource Graph samples, see
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/role-based-access-control.md
This setting will prevent any changes to any Cosmos resource from any client con
- Modifying stored procedures, triggers or user-defined functions.
-If your applications (or users via Azure portal) perform any of these actions they will need to be migrated to execute via [ARM Templates](./manage-with-templates.md), [PowerShell](manage-with-powershell.md), [Azure CLI](manage-with-cli.md), REST, or [Azure Management Library](https://github.com/Azure-Samples/cosmos-management-net). Note that Azure Management is available in [multiple languages](/azure/?product=featured#languages-and-tools).
+If your applications (or users via Azure portal) perform any of these actions they will need to be migrated to execute via [ARM Templates](sql/manage-with-templates.md), [PowerShell](sql/manage-with-powershell.md), [Azure CLI](sql/manage-with-cli.md), REST, or [Azure Management Library](https://github.com/Azure-Samples/cosmos-management-net). Note that Azure Management is available in [multiple languages](/azure/?product=featured#languages-and-tools).
### Set via ARM Template
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/20/2021 Last updated : 08/27/2021
cosmos-db Advanced Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/advanced-threat-protection.md
+
+ Title: 'Advanced Threat Protection for Azure Cosmos DB'
+description: Learn how Azure Cosmos DB provides encryption of data at rest and how it's implemented.
+++ Last updated : 06/08/2021++++++
+# Advanced Threat Protection for Azure Cosmos DB (Preview)
+
+Advanced Threat Protection for Azure Cosmos DB provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and to integrate alerts with central security monitoring systems.
+
+Security alerts are triggered when anomalies in activity occur. These security alerts are integrated with [Azure Security Center](https://azure.microsoft.com/services/security-center/), and are also sent via email to subscription administrators, with details of the suspicious activity and recommendations on how to investigate and remediate the threats.
+
+> [!NOTE]
+>
+> * Advanced Threat Protection for Azure Cosmos DB is currently available only for the SQL API.
+> * Advanced Threat Protection for Azure Cosmos DB is currently not available in Azure government and sovereign cloud regions.
+
+For a full investigation experience of the security alerts, we recommend enabling [diagnostic logging in Azure Cosmos DB](../monitor-cosmos-db.md), which logs operations on the database itself, including CRUD operations on all documents, containers, and databases.
+
+## Threat types
+
+Advanced Threat Protection for Azure Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
+
+- **Access from unusual locations**: This alert is triggered when there is a change in the access pattern to an Azure Cosmos account, where someone has connected to the Azure Cosmos DB endpoint from an unusual geographical location. In some cases, the alert detects a legitimate action, such as a new application or a developer's maintenance operation. In other cases, the alert detects a malicious action from a former employee, an external attacker, and so on.
+
+- **Unusual data extraction**: This alert is triggered when a client is extracting an unusual amount of data from an Azure Cosmos DB account. This can be a symptom of data exfiltration performed to transfer all the data stored in the account to an external data store.
+++
+## Configure Advanced Threat Protection
+
+You can configure advanced threat protection in any of several ways, described in the following sections.
+
+# [Portal](#tab/azure-portal)
+
+1. Launch the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
+
+2. In your Azure Cosmos DB account, from the **Settings** menu, select **Advanced security**.
+
+ :::image type="content" source="./media/advanced-threat-protection/cosmos-db-atp.png" alt-text="Set up ATP":::
+
+3. In the **Advanced security** configuration blade:
+
+ * Click the **Advanced Threat Protection** option to set it to **ON**.
+ * Click **Save** to save the new or updated Advanced Threat Protection policy.
+
+# [REST API](#tab/rest-api)
+
+Use REST API commands to create, update, or get the Advanced Threat Protection setting for a specific Azure Cosmos DB account.
+
+* [Advanced Threat Protection - Create](/rest/api/securitycenter/advancedthreatprotection/create)
+* [Advanced Threat Protection - Get](/rest/api/securitycenter/advancedthreatprotection/get)
+
+# [PowerShell](#tab/azure-powershell)
+
+Use the following PowerShell cmdlets:
+
+* [Enable Advanced Threat Protection](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection)
+* [Get Advanced Threat Protection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection)
+* [Disable Advanced Threat Protection](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection)
+
+# [ARM template](#tab/arm-template)
+
+Use an Azure Resource Manager (ARM) template to set up Cosmos DB with Advanced Threat Protection enabled.
+For more information, see
+[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/cosmosdb-advanced-threat-protection-create-account/).
+
+# [Azure Policy](#tab/azure-policy)
+
+Use an Azure Policy to enable Advanced Threat Protection for Cosmos DB.
+
+1. Launch the Azure **Policy - Definitions** page, and search for the **Deploy Advanced Threat Protection for Cosmos DB** policy.
+
+ :::image type="content" source="./media/advanced-threat-protection/cosmos-db.png" alt-text="Search Policy":::
+
+1. Click on the **Deploy Advanced Threat Protection for CosmosDB** policy, and then click **Assign**.
+
+ :::image type="content" source="./media/advanced-threat-protection/cosmos-db-atp-policy.png" alt-text="Select Subscription Or Group":::
++
+1. From the **Scope** field, click the three dots, select an Azure subscription or resource group, and then click **Select**.
+
+ :::image type="content" source="./media/advanced-threat-protection/cosmos-db-atp-details.png" alt-text="Policy Definitions Page":::
++
+1. Enter the other parameters, and click **Assign**.
++++
+## Manage ATP security alerts
+
+When Azure Cosmos DB activity anomalies occur, a security alert is triggered with information about the suspicious security event.
+
+ From Azure Security Center, you can review and manage your current [security alerts](../../security-center/security-center-alerts-overview.md). Click on a specific alert in [Security Center](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/0) to view possible causes and recommended actions to investigate and mitigate the potential threat. The following image shows an example of alert details provided in Security Center.
+
+ :::image type="content" source="./media/advanced-threat-protection/cosmos-db-alert-details.png" alt-text="Threat details":::
+
+An email notification is also sent with the alert details and recommended actions. The following image shows an example of an alert email.
+
+ :::image type="content" source="./media/advanced-threat-protection/cosmos-db-alert.png" alt-text="Alert details":::
+
+## Cosmos DB ATP alerts
+
+ To see a list of the alerts generated when monitoring Azure Cosmos DB accounts, see the [Cosmos DB alerts](../../security-center/alerts-reference.md#alerts-azurecosmos) section in the Azure Security Center documentation.
+
+## Next steps
+
+* Learn more about [Diagnostic logging in Azure Cosmos DB](../cosmosdb-monitor-resource-logs.md)
+* Learn more about [Azure Security Center](../../security-center/security-center-introduction.md)
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/best-practice-dotnet.md
+
+ Title: Azure Cosmos DB best practices for .NET SDK v3
+description: Learn the best practices for using the Azure Cosmos DB .NET SDK v3
++++ Last updated : 08/26/2021++++
+# Best practices for Azure Cosmos DB .NET SDK
+
+This article walks through the best practices for using the Azure Cosmos DB .NET SDK. Following these practices will help improve your latency and availability, and boost overall performance. A short client-configuration sketch follows the checklist below.
+
+Watch the video below, in which a Cosmos DB engineer shares more about using the .NET SDK!
++
+> [!VIDEO https://www.youtube.com/embed/McZIQhZpvew?start=118]
+>
+
+## Checklist
+|Checked | Topic |Details/Links |
+||||
+|<input type="checkbox"/> | SDK Version | Always use the [latest version](sql-api-sdk-dotnet-standard.md) of the Cosmos DB SDK for optimal performance. |
+| <input type="checkbox"/> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). |
+| <input type="checkbox"/> | Regions | Whenever possible, run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account to reduce latency. Enable two to four regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). Without this configuration, the account will lose write availability for the entire duration of a write region outage, because manual failover will not succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK, see [this tutorial](tutorial-global-distribution-sql-api.md). |
+| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
+| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
+| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. |
+| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).|
+|<input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |
+|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, set [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and set [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property controls how long unused connections stay open before they're closed, which reduces the number of unused connections. By default, idle connections are kept open indefinitely. The value must be greater than or equal to 10 minutes; we recommend values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
+|<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
+|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB, see the [request timeout troubleshooting guide](troubleshoot-dot-net-sdk-request-timeout.md). |
+|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry writes on transient failures, because writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry, see the [retry guidance](troubleshoot-dot-net-sdk.md#retry-logics). |
+|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
+|<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
+| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestOptions` to the number of partitions you have. If you are not aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
+| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample of how to create indexes using the SDK, see [indexing policy tips](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy). |
+| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
+| <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
+| <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics, see [query profiling](profile-sql-api-query.md). |
+| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in the V3 SDK for more detailed Cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if `Diagnostics.ElapsedTime` is greater than a designated threshold value (for example, if you have an SLA of 10 seconds, capture diagnostics when `ElapsedTime` > 10 seconds). Only use these diagnostics during performance testing. |
+
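+As a quick reference, here is a minimal sketch that pulls together several checklist items above: a singleton client, preferred regions, Direct mode, and the ephemeral-port settings. The endpoint, key, and region names are placeholders, not recommendations.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using Microsoft.Azure.Cosmos;
+
+public static class CosmosClientFactory
+{
+    // Singleton: one CosmosClient for the lifetime of the application.
+    public static readonly CosmosClient Client = new CosmosClient(
+        "https://<your-account>.documents.azure.com:443/",
+        "<your-key>",
+        new CosmosClientOptions
+        {
+            // Reads are served from the first available region in this list.
+            ApplicationPreferredRegions = new List<string> { "West US 2", "East US 2" },
+            // Direct mode for best performance.
+            ConnectionMode = ConnectionMode.Direct,
+            // Close idle connections after 20 minutes instead of never.
+            IdleTcpConnectionTimeout = TimeSpan.FromMinutes(20),
+            // Use a private pool of ephemeral ports.
+            PortReuseMode = PortReuseMode.PrivatePortPool
+        });
+}
+```
+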
+## Best practices when using Gateway mode
+Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
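+
+The following is a minimal sketch, assuming the v2 SDK that exposes `ConnectionPolicy.MaxConnectionLimit`; the endpoint, key, and the value 1,000 are placeholders.
+
+```csharp
+using System;
+using Microsoft.Azure.Documents.Client;
+
+// Raise the per-host connection limit for Gateway mode (v2 SDK sketch).
+ConnectionPolicy connectionPolicy = new ConnectionPolicy
+{
+    ConnectionMode = ConnectionMode.Gateway,
+    MaxConnectionLimit = 1000
+};
+DocumentClient client = new DocumentClient(
+    new Uri("https://<your-account>.documents.azure.com:443/"),
+    "<your-key>",
+    connectionPolicy);
+```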
+
+## Best practices for write-heavy workloads
+For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
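+
+A minimal sketch of this option, assuming an existing `Container` named `container` and a hypothetical `MyItem` type with a `pk` partition-key property:
+
+```csharp
+ItemRequestOptions requestOptions = new ItemRequestOptions
+{
+    // Don't return the created resource in the response body.
+    EnableContentResponseOnWrite = false
+};
+
+ItemResponse<MyItem> response = await container.CreateItemAsync(
+    item, new PartitionKey(item.pk), requestOptions);
+
+// Headers such as the request charge are still available.
+double requestCharge = response.RequestCharge;
+```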
+
+## Next steps
+For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/bulk-executor-dot-net.md
+
+ Title: Use bulk executor .NET library in Azure Cosmos DB for bulk import and update operations
+description: Bulk import and update the Azure Cosmos DB documents using the bulk executor .NET library.
+++
+ms.devlang: dotnet
+ Last updated : 03/23/2020+++++
+# Use the bulk executor .NET library to perform bulk operations in Azure Cosmos DB
+
+> [!NOTE]
+> The bulk executor library described in this article is maintained for applications that use the .NET SDK 2.x version. For new applications, use the **bulk support** that is directly available with the [.NET SDK version 3.x](tutorial-sql-api-dotnet-bulk-import.md); it does not require any external library.
+
+> If you are currently using the bulk executor library and planning to migrate to bulk support on the newer SDK, use the steps in the [Migration guide](how-to-migrate-from-bulk-executor-library.md) to migrate your application.
+
+This tutorial provides instructions on using the bulk executor .NET library to import and update documents in an Azure Cosmos container. To learn about the bulk executor library and how it helps you leverage massive throughput and storage, see the [bulk executor library overview](../bulk-executor-overview.md) article. In this tutorial, you will see a sample .NET application that bulk imports randomly generated documents into an Azure Cosmos container. After importing, it shows how you can bulk update the imported data by specifying patches as operations to perform on specific document fields.
+
+Currently, the bulk executor library is supported by Azure Cosmos DB SQL API and Gremlin API accounts only. This article describes how to use the bulk executor .NET library with SQL API accounts. To learn about using the bulk executor .NET library with Gremlin API accounts, see [perform bulk operations in the Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md).
+
+## Prerequisites
+
+* If you don't already have Visual Studio 2019 installed, you can download and use the [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable "Azure development" during the Visual Studio setup.
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+* You can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
+
+* Create an Azure Cosmos DB SQL API account by using the steps described in [create database account](create-sql-api-dotnet.md#create-account) section of the .NET quickstart article.
+
+## Clone the sample application
+
+Now let's switch to working with code by downloading a sample .NET application from GitHub. This application performs bulk operations on the data stored in your Azure Cosmos account. To clone the application, open a command prompt, navigate to the directory where you want to copy it and run the following command:
+
+```bash
+git clone https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started.git
+```
+
+The cloned repository contains two samples, "BulkImportSample" and "BulkUpdateSample". You can open either of the sample applications, update the connection strings in the App.config file with your Azure Cosmos DB account's connection strings, build the solution, and run it.
+
+The "BulkImportSample" application generates random documents and bulk imports them to your Azure Cosmos account. The "BulkUpdateSample" application bulk updates the imported documents by specifying patches as operations to perform on specific document fields. In the next sections, you will review the code in each of these sample apps.
+
+## Bulk import data to an Azure Cosmos account
+
+1. Navigate to the "BulkImportSample" folder and open the "BulkImportSample.sln" file.
+
+2. The Azure Cosmos DB connection strings are retrieved from the App.config file as shown in the following code:
+
+ ```csharp
+ private static readonly string EndpointUrl = ConfigurationManager.AppSettings["EndPointUrl"];
+ private static readonly string AuthorizationKey = ConfigurationManager.AppSettings["AuthorizationKey"];
+ private static readonly string DatabaseName = ConfigurationManager.AppSettings["DatabaseName"];
+ private static readonly string CollectionName = ConfigurationManager.AppSettings["CollectionName"];
+ private static readonly int CollectionThroughput = int.Parse(ConfigurationManager.AppSettings["CollectionThroughput"]);
+ ```
+
+ The bulk importer creates a new database and a container with the database name, container name, and the throughput values specified in the App.config file.
+
+3. Next the DocumentClient object is initialized with Direct TCP connection mode:
+
+ ```csharp
+ ConnectionPolicy connectionPolicy = new ConnectionPolicy
+ {
+ ConnectionMode = ConnectionMode.Direct,
+ ConnectionProtocol = Protocol.Tcp
+ };
+    DocumentClient client = new DocumentClient(new Uri(endpointUrl), authorizationKey,
+        connectionPolicy);
+ ```
+
+4. The BulkExecutor object is initialized with high retry values for wait time and throttled requests. These values are then set to 0 to pass congestion control to BulkExecutor for its lifetime.
+
+ ```csharp
+ // Set retry options high during initialization (default values).
+ client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 30;
+ client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 9;
+
+ IBulkExecutor bulkExecutor = new BulkExecutor(client, dataCollection);
+ await bulkExecutor.InitializeAsync();
+
+ // Set retries to 0 to pass complete control to bulk executor.
+ client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 0;
+ client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 0;
+ ```
+
+5. The application invokes the BulkImportAsync API. The .NET library provides two overloads of the bulk import API - one that accepts a list of serialized JSON documents and the other that accepts a list of deserialized POCO documents. To learn more about the definitions of each of these overloaded methods, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkimportasync).
+
+ ```csharp
+ BulkImportResponse bulkImportResponse = await bulkExecutor.BulkImportAsync(
+ documents: documentsToImportInBatch,
+ enableUpsert: true,
+ disableAutomaticIdGeneration: true,
+ maxConcurrencyPerPartitionKeyRange: null,
+ maxInMemorySortingBatchSize: null,
+ cancellationToken: token);
+ ```
+ **BulkImportAsync method accepts the following parameters:**
+
+ |**Parameter** |**Description** |
+ |||
+ |enableUpsert | A flag to enable upsert operations on the documents. If a document with the given ID already exists, it's updated. By default, it is set to false. |
+ |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
+    |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range; setting this parameter to null causes the library to use a default value of 20. |
+    |maxInMemorySortingBatchSize | The maximum number of documents that are pulled from the document enumerator, which is passed to the API call in each stage. For the in-memory sorting phase that happens before bulk importing, setting this parameter to null causes the library to use the default value of min(documents.count, 1000000). |
+ |cancellationToken | The cancellation token to gracefully exit the bulk import operation. |
+
+ **Bulk import response object definition**
+ The result of the bulk import API call contains the following attributes:
+
+ |**Parameter** |**Description** |
+ |||
+ |NumberOfDocumentsImported (long) | The total number of documents that were successfully imported out of the total documents supplied to the bulk import API call. |
+ |TotalRequestUnitsConsumed (double) | The total request units (RU) consumed by the bulk import API call. |
+ |TotalTimeTaken (TimeSpan) | The total time taken by the bulk import API call to complete the execution. |
+ |BadInputDocuments (List\<object>) | The list of bad-format documents that were not successfully imported in the bulk import API call. Fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
+
+## Bulk update data in your Azure Cosmos account
+
+You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the `Name` field to a new value and remove the `Description` field from the existing documents. For the full set of supported update operations, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+
+1. Navigate to the "BulkUpdateSample" folder and open the "BulkUpdateSample.sln" file.
+
+2. Define the update items along with the corresponding field update operations. In this example, you will use `SetUpdateOperation` to update the `Name` field and `UnsetUpdateOperation` to remove the `Description` field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+
+ ```csharp
+ SetUpdateOperation<string> nameUpdate = new SetUpdateOperation<string>("Name", "UpdatedDoc");
+ UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
+
+ List<UpdateOperation> updateOperations = new List<UpdateOperation>();
+ updateOperations.Add(nameUpdate);
+ updateOperations.Add(descriptionUpdate);
+
+ List<UpdateItem> updateItems = new List<UpdateItem>();
+ for (int i = 0; i < 10; i++)
+ {
+ updateItems.Add(new UpdateItem(i.ToString(), i.ToString(), updateOperations));
+ }
+ ```
+
+3. The application invokes the BulkUpdateAsync API. To learn about the definition of the BulkUpdateAsync method, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.ibulkexecutor.bulkupdateasync).
+
+ ```csharp
+ BulkUpdateResponse bulkUpdateResponse = await bulkExecutor.BulkUpdateAsync(
+ updateItems: updateItems,
+ maxConcurrencyPerPartitionKeyRange: null,
+ maxInMemorySortingBatchSize: null,
+ cancellationToken: token);
+ ```
+ **BulkUpdateAsync method accepts the following parameters:**
+
+ |**Parameter** |**Description** |
+ |||
+    |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range; setting this parameter to null causes the library to use the default value (20). |
+    |maxInMemorySortingBatchSize | The maximum number of update items pulled from the update items enumerator passed to the API call in each stage. For the in-memory sorting phase that happens before bulk updating, setting this parameter to null causes the library to use the default value of min(updateItems.count, 1000000). |
+ | cancellationToken|The cancellation token to gracefully exit the bulk update operation. |
+
+ **Bulk update response object definition**
+ The result of the bulk update API call contains the following attributes:
+
+ |**Parameter** |**Description** |
+ |||
+ |NumberOfDocumentsUpdated (long) | The number of documents that were successfully updated out of the total documents supplied to the bulk update API call. |
+ |TotalRequestUnitsConsumed (double) | The total request units (RUs) consumed by the bulk update API call. |
+ |TotalTimeTaken (TimeSpan) | The total time taken by the bulk update API call to complete the execution. |
+
+## Performance tips
+
+Consider the following points for better performance when using the bulk executor library:
+
+* For best performance, run your application from an Azure virtual machine that is in the same region as your Azure Cosmos account's write region.
+
+* It is recommended that you instantiate a single `BulkExecutor` object for the whole application within a single virtual machine that corresponds to a specific Azure Cosmos container.
+
+* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO, because it spawns multiple tasks internally. Avoid spawning multiple concurrent tasks within your application process that execute bulk operation API calls. If a single bulk operation API call that is running on a single virtual machine is unable to consume the entire container's throughput (if your container's throughput > 1 million RU/s), it's preferable to create separate virtual machines to concurrently execute the bulk operation API calls.
+
+* Ensure the `InitializeAsync()` method is invoked after instantiating a BulkExecutor object to fetch the target Cosmos container's partition map.
+
+* In your application's App.config, ensure **gcServer** is enabled for better performance:
+ ```xml
+ <runtime>
+ <gcServer enabled="true" />
+ </runtime>
+ ```
+* The library emits traces that can be collected either into a log file or on the console. To enable both, add the following code to your application's App.Config file.
+
+ ```xml
+ <system.diagnostics>
+ <trace autoflush="false" indentsize="4">
+ <listeners>
+ <add name="logListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="application.log" />
+ <add name="consoleListener" type="System.Diagnostics.ConsoleTraceListener" />
+ </listeners>
+ </trace>
+ </system.diagnostics>
+ ```
+
+## Next steps
+
+* To learn about the NuGet package details and the release notes, see the [bulk executor SDK details](sql-api-sdk-bulk-executor-dot-net.md).
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/bulk-executor-java.md
+
+ Title: Use bulk executor Java library in Azure Cosmos DB to perform bulk import and update operations
+description: Bulk import and update Azure Cosmos DB documents using bulk executor Java library
+++
+ms.devlang: java
+ Last updated : 08/26/2020+++++
+# Use bulk executor Java library to perform bulk operations on Azure Cosmos DB data
+
+This tutorial provides instructions on using Azure Cosmos DB's bulk executor Java library to import and update Azure Cosmos DB documents. To learn about the bulk executor library and how it helps you leverage massive throughput and storage, see the [bulk executor library overview](../bulk-executor-overview.md) article. In this tutorial, you build a Java application that generates random documents and bulk imports them into an Azure Cosmos container. After importing, you will bulk update some properties of a document.
+
+Currently, the bulk executor library is supported only by Azure Cosmos DB SQL API and Gremlin API accounts. This article describes how to use the bulk executor Java library with SQL API accounts. To learn about using the bulk executor .NET library with Gremlin API, see [perform bulk operations in Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md). The bulk executor library described in this article is available only for the [Azure Cosmos DB Java sync SDK v2](sql-api-sdk-java.md), and it is the current recommended solution for Java bulk support. It is not available for the 3.x, 4.x, or other higher SDK versions.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+* You can [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
+
+* [Java Development Kit (JDK) 1.7+](/java/azure/jdk/)
+ - On Ubuntu, run `apt-get install default-jdk` to install the JDK.
+
+ - Be sure to set the JAVA_HOME environment variable to point to the folder where the JDK is installed.
+
+* [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) a [Maven](https://maven.apache.org/) binary archive
+
+ - On Ubuntu, you can run `apt-get install maven` to install Maven.
+
+* Create an Azure Cosmos DB SQL API account by using the steps described in the [create database account](create-sql-api-java.md#create-a-database-account) section of the Java quickstart article.
+
+## Clone the sample application
+
+Now let's switch to working with code by downloading a sample Java application from GitHub. This application performs bulk operations on Azure Cosmos DB data. To clone the application, open a command prompt, navigate to the directory where you want to copy the application and run the following command:
+
+```bash
+ git clone https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started.git
+```
+
+The cloned repository contains two samples "bulkimport" and "bulkupdate" relative to the "\azure-cosmosdb-bulkexecutor-java-getting-started\samples\bulkexecutor-sample\src\main\java\com\microsoft\azure\cosmosdb\bulkexecutor" folder. The "bulkimport" application generates random documents and imports them to Azure Cosmos DB. The "bulkupdate" application updates some documents in Azure Cosmos DB. In the next sections, we will review the code in each of these sample apps.
+
+## Bulk import data to Azure Cosmos DB
+
+1. The Azure Cosmos DB connection strings are read as arguments and assigned to variables defined in the CmdLineConfiguration.java file.
+
+2. Next the DocumentClient object is initialized by using the following statements:
+
+ ```java
+ ConnectionPolicy connectionPolicy = new ConnectionPolicy();
+ connectionPolicy.setMaxPoolSize(1000);
+ DocumentClient client = new DocumentClient(
+ HOST,
+ MASTER_KEY,
+ connectionPolicy,
+        ConsistencyLevel.Session);
+ ```
+
+3. The DocumentBulkExecutor object is initialized with high retry values for wait time and throttled requests. These values are then set to 0 to pass congestion control to DocumentBulkExecutor for its lifetime.
+
+ ```java
+ // Set client's retry options high for initialization
+ client.getConnectionPolicy().getRetryOptions().setMaxRetryWaitTimeInSeconds(30);
+ client.getConnectionPolicy().getRetryOptions().setMaxRetryAttemptsOnThrottledRequests(9);
+
+ // Builder pattern
+ Builder bulkExecutorBuilder = DocumentBulkExecutor.builder().from(
+ client,
+ DATABASE_NAME,
+ COLLECTION_NAME,
+ collection.getPartitionKey(),
+        offerThroughput); // throughput you want to allocate for bulk import out of the container's total throughput
+
+ // Instantiate DocumentBulkExecutor
+    DocumentBulkExecutor bulkExecutor = bulkExecutorBuilder.build();
+
+ // Set retries to 0 to pass complete control to bulk executor
+ client.getConnectionPolicy().getRetryOptions().setMaxRetryWaitTimeInSeconds(0);
+ client.getConnectionPolicy().getRetryOptions().setMaxRetryAttemptsOnThrottledRequests(0);
+ ```
+
+4. Call the importAll API to bulk import randomly generated documents into an Azure Cosmos container. You can configure the command-line options in the CmdLineConfiguration.java file.
+
+ ```java
+ BulkImportResponse bulkImportResponse = bulkExecutor.importAll(documents, false, true, null);
+ ```
+    The bulk import API accepts a collection of JSON-serialized documents and has the following syntax. For more details, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor):
+
+ ```java
+ public BulkImportResponse importAll(
+ Collection<String> documents,
+ boolean isUpsert,
+ boolean disableAutomaticIdGeneration,
+ Integer maxConcurrencyPerPartitionRange) throws DocumentClientException;
+ ```
+
+ The importAll method accepts the following parameters:
+
+ |**Parameter** |**Description** |
+ |||
+    |isUpsert | A flag to enable upsert of the documents. If a document with the given ID already exists, it's updated. |
+ |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
+ |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
+
+ **Bulk import response object definition**
+ The result of the bulk import API call contains the following get methods:
+
+ |**Parameter** |**Description** |
+ |||
+ |int getNumberOfDocumentsImported() | The total number of documents that were successfully imported out of the documents supplied to the bulk import API call. |
+ |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk import API call. |
+ |Duration getTotalTimeTaken() | The total time taken by the bulk import API call to complete execution. |
+ |List\<Exception> getErrors() | Gets the list of errors if some documents out of the batch supplied to the bulk import API call failed to get inserted. |
+ |List\<Object> getBadInputDocuments() | The list of bad-format documents that were not successfully imported in the bulk import API call. User should fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
+
+5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
+
+ ```bash
+ mvn clean package
+ ```
+
+6. After the target dependencies are generated, you can invoke the bulk importer application by using the following command:
+
+ ```bash
+ java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint *<Fill in your Azure Cosmos DB's endpoint>* -masterKey *<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkImportDb -collectionId bulkImportColl -operation import -shouldCreateCollection -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
+ ```
+
+    The bulk importer creates a new database and a collection with the database name, collection name, and throughput values specified in the command-line configuration.
+
+## Bulk update data in Azure Cosmos DB
+
+You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the Name field to a new value and remove the Description field from the existing documents. For the full set of supported field update operations, see [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
+
+1. Define the update items along with the corresponding field update operations. In this example, you will use SetUpdateOperation to update the Name field and UnsetUpdateOperation to remove the Description field from all the documents. You can also perform other operations, like incrementing a document field by a specific value, pushing specific values into an array field, or removing a specific value from an array field. To learn about the different methods provided by the bulk update API, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
+
+ ```java
+ SetUpdateOperation<String> nameUpdate = new SetUpdateOperation<>("Name","UpdatedDocValue");
+ UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
+
+ ArrayList<UpdateOperationBase> updateOperations = new ArrayList<>();
+ updateOperations.add(nameUpdate);
+ updateOperations.add(descriptionUpdate);
+
+ List<UpdateItem> updateItems = new ArrayList<>(cfg.getNumberOfDocumentsForEachCheckpoint());
+ IntStream.range(0, cfg.getNumberOfDocumentsForEachCheckpoint()).mapToObj(j -> {
+ return new UpdateItem(Long.toString(prefix + j), Long.toString(prefix + j), updateOperations);
+ }).collect(Collectors.toCollection(() -> updateItems));
+ ```
+
+2. Call the updateAll API to bulk update documents in an Azure Cosmos container. You can configure the command-line options to be passed in the CmdLineConfiguration.java file.
+
+ ```java
+    BulkUpdateResponse bulkUpdateResponse = bulkExecutor.updateAll(updateItems, null);
+ ```
+
+    The bulk update API accepts a collection of items to be updated. Each update item specifies the list of field update operations to be performed on a document identified by an ID and a partition key value. For more details, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor):
+
+ ```java
+ public BulkUpdateResponse updateAll(
+ Collection<UpdateItem> updateItems,
+ Integer maxConcurrencyPerPartitionRange) throws DocumentClientException;
+ ```
+
+ The updateAll method accepts the following parameters:
+
+ |**Parameter** |**Description** |
+ |||
+ |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
+
+    **Bulk update response object definition**
+    The result of the bulk update API call contains the following get methods:
+
+ |**Parameter** |**Description** |
+ |||
+ |int getNumberOfDocumentsUpdated() | The total number of documents that were successfully updated out of the documents supplied to the bulk update API call. |
+ |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk update API call. |
+ |Duration getTotalTimeTaken() | The total time taken by the bulk update API call to complete execution. |
+ |List\<Exception> getErrors() | Gets the list of operational or networking issues related to the update operation. |
+ |List\<BulkUpdateFailure> getFailedUpdates() | Gets the list of updates which could not be completed along with the specific exceptions leading to the failures.|
+
+3. After you have the bulk update application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
+
+ ```bash
+ mvn clean package
+ ```
+
+4. After the target dependencies are generated, you can invoke the bulk update application by using the following command:
+
+ ```bash
+ java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint **<Fill in your Azure Cosmos DB's endpoint>* -masterKey **<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkUpdateDb -collectionId bulkUpdateColl -operation update -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
+ ```
+
+## Performance tips
+
+Consider the following points for better performance when using bulk executor library:
+
+* For best performance, run your application from an Azure VM in the same region as your Cosmos DB account's write region.
+* For achieving higher throughput:
+
+    * Set the JVM's heap size to a large enough number to avoid any memory issues when handling a large number of documents. Suggested heap size: max(3GB, 3 * sizeof(all documents passed to the bulk import API in one batch)).
+    * Because there is a fixed preprocessing cost, you get higher throughput when performing bulk operations with a large number of documents. For example, if you want to import 10,000,000 documents, running bulk import 10 times on batches of 1,000,000 documents is preferable to running bulk import 100 times on batches of 100,000 documents.
+
+* It is recommended to instantiate a single DocumentBulkExecutor object for the entire application within a single virtual machine that corresponds to a specific Azure Cosmos container.
+
+* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO, because it spawns multiple tasks internally. Avoid spawning multiple concurrent tasks within your application process that each execute bulk operation API calls. If a single bulk operation API call running on a single virtual machine is unable to consume your entire container's throughput (if your container's throughput > 1 million RU/s), it's preferable to create separate virtual machines to concurrently execute bulk operation API calls.
+
+
+## Next steps
+* To learn about the Maven package details and release notes of the bulk executor Java library, see [bulk executor SDK details](sql-api-sdk-bulk-executor-java.md).
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/certificate-based-authentication.md
+
+ Title: Certificate-based authentication with Azure Cosmos DB and Active Directory
+description: Learn how to configure an Azure AD identity for certificate-based authentication to access keys from Azure Cosmos DB.
++++ Last updated : 06/11/2019++++++
+# Certificate-based authentication for an Azure AD identity to access keys from an Azure Cosmos DB account
+
+Certificate-based authentication enables your client application to be authenticated by using Azure Active Directory (Azure AD) with a client certificate. You can perform certificate-based authentication on a machine where you need an identity, such as an on-premises machine or a virtual machine in Azure. Your application can then read Azure Cosmos DB keys without having the keys directly in the application. This article describes how to create a sample Azure AD application, configure it for certificate-based authentication, sign into Azure using the new application identity, and then retrieve the keys from your Azure Cosmos account. This article uses Azure PowerShell to set up the identities and provides a C# sample app that authenticates and accesses keys from your Azure Cosmos account.
+
+## Prerequisites
+
+* Install the [latest version](/powershell/azure/install-az-ps) of Azure PowerShell.
+
+* If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+## Register an app in Azure AD
+
+In this step, you will register a sample web application in your Azure AD account. This application is later used to read the keys from your Azure Cosmos DB account. Use the following steps to register an application:
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+1. Open the Azure **Active Directory** pane, go to the **App registrations** pane, and select **New registration**.
+
+ :::image type="content" source="./media/certificate-based-authentication/new-app-registration.png" alt-text="New application registration in Active Directory":::
+
+1. Fill the **Register an application** form with the following details:
+
+   * **Name** – Provide a name for your application, such as "sampleApp".
+   * **Supported account types** – Choose **Accounts in this organizational directory only (Default Directory)** to allow resources in your current directory to access this application.
+   * **Redirect URL** – Choose an application of type **Web** and provide a URL where your application is hosted. It can be any URL; for this example, you can provide a test URL such as `https://sampleApp.com`. It's okay even if the app doesn't exist.
+
+ :::image type="content" source="./media/certificate-based-authentication/register-sample-web-app.png" alt-text="Registering a sample web application":::
+
+1. Select **Register** after you fill the form.
+
+1. After the app is registered, make a note of the **Application (client) ID** and **Object ID**; you will use these details in the next steps.
+
+ :::image type="content" source="./media/certificate-based-authentication/get-app-object-ids.png" alt-text="Get the application and object IDs":::
+
+## Install the AzureAD module
+
+In this step, you will install the Azure AD PowerShell module. This module is required to get the ID of the application you registered in the previous step and to associate a self-signed certificate with that application.
+
+1. Open Windows PowerShell ISE with administrator rights. If you haven't already done so, install the Az PowerShell module and connect to your subscription. If you have multiple subscriptions, you can set the context of the current subscription as shown in the following commands:
+
+ ```powershell
+ Install-Module -Name Az -AllowClobber
+ Connect-AzAccount
+
+ Get-AzSubscription
+ $context = Get-AzSubscription -SubscriptionId <Your_Subscription_ID>
+ Set-AzContext $context
+ ```
+
+1. Install and import the [AzureAD](/powershell/module/azuread/) module:
+
+ ```powershell
+ Install-Module AzureAD
+ Import-Module AzureAD
+ ```
+
+## Sign into your Azure AD
+
+Sign in to the Azure AD tenant where you registered the application. Use the `Connect-AzureAD` command to sign in to your account, and enter your Azure account credentials in the pop-up window.
+
+```powershell
+Connect-AzureAD
+```
+
+## Create a self-signed certificate
+
+Open another instance of Windows PowerShell ISE, and run the following commands to create a self-signed certificate and read the key associated with the certificate:
+
+```powershell
+$cert = New-SelfSignedCertificate -CertStoreLocation "Cert:\CurrentUser\My" -Subject "CN=sampleAppCert" -KeySpec KeyExchange
+$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
+```
+
+## Create the certificate-based credential
+
+Next, run the following commands to get the object ID of your application and create the certificate-based credential. In this example, the certificate is set to expire after a year; you can set it to any required end date.
+
+```powershell
+$application = Get-AzureADApplication -ObjectId <Object_ID_of_Your_Application>
+
+New-AzureADApplicationKeyCredential -ObjectId $application.ObjectId -CustomKeyIdentifier "Key1" -Type AsymmetricX509Cert -Usage Verify -Value $keyValue -EndDate "2020-01-01"
+```
+
+The above command outputs the details of the new key credential, including its key ID and end date.
+
+## Configure your Azure Cosmos account to use the new identity
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos account and open the **Access control (IAM)** blade.
+
+1. Select **Add** and **Add role assignment**. Add the sampleApp you created in the previous step with **Contributor** role as shown in the following screenshot:
+
+ :::image type="content" source="./media/certificate-based-authentication/configure-cosmos-account-with-identify.png" alt-text="Configure Azure Cosmos account to use the new identity":::
+
+1. Select **Save** after you fill out the form.
+
+## Register your certificate with Azure AD
+
+You can associate the certificate-based credential with the client application in Azure AD from the Azure portal. To associate the credential, you must upload the certificate file with the following steps:
+
+In the Azure app registration for the client application:
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+
+1. Open the Azure **Active Directory** pane, go to the **App registrations** pane, and open the sample app you created in the previous step.
+
+1. Select **Certificates & secrets** and then **Upload certificate**. Browse to the certificate file you created in the previous step and upload it.
+
+1. Select **Add**. After the certificate is uploaded, the thumbprint, start date, and expiration values are displayed.
+
+## Access the keys from PowerShell
+
+In this step, you will sign in to Azure by using the application and the certificate you created, and access your Azure Cosmos account's keys.
+
+1. First, clear the Azure account credentials you used to sign in. You can clear the credentials by using the following command:
+
+ ```powershell
+ Disconnect-AzAccount -Username <Your_Azure_account_email_id>
+ ```
+
+1. Next, validate that you can sign in to Azure by using the application's credentials and access the Azure Cosmos DB keys:
+
+ ```powershell
+ Login-AzAccount -ApplicationId <Your_Application_ID> -CertificateThumbprint $cert.Thumbprint -ServicePrincipal -Tenant <Tenant_ID_of_your_application>
+
+ Get-AzCosmosDBAccountKey `
+ -ResourceGroupName "<Resource_Group_Name_of_your_Azure_Cosmos_account>" `
+ -Name "<Your_Azure_Cosmos_Account_Name>" `
+ -Type "Keys"
+ ```
+
+The previous command will display the primary and secondary keys of your Azure Cosmos account. You can view the Activity log of your Azure Cosmos account to validate that the get keys request succeeded and that the event was initiated by the "sampleApp" application.
+
+## Access the keys from a C# application
+
+You can also validate this scenario by accessing the keys from a C# application. The following C# console application can access the Azure Cosmos DB keys by using the app registered in Active Directory. Make sure to update the tenantId, clientId, certName, resource group name, subscription ID, and Azure Cosmos account name before you run the code.
+
+```csharp
+using System;
+using Microsoft.IdentityModel.Clients.ActiveDirectory;
+using System.Linq;
+using System.Net.Http;
+using System.Security.Cryptography.X509Certificates;
+using System.Threading;
+using System.Threading.Tasks;
+
+namespace TodoListDaemonWithCert
+{
+    class Program
+    {
+        private static string aadInstance = "https://login.windows.net/";
+        private static string tenantId = "<Your_Tenant_ID>";
+        private static string clientId = "<Your_Client_ID>";
+        private static string certName = "<Your_Certificate_Name>";
+
+        static int Main(string[] args)
+        {
+            MainAsync().Wait();
+            Console.ReadKey();
+
+            return 0;
+        }
+
+        static async Task MainAsync()
+        {
+            // Acquire a token for the Azure Resource Manager endpoint using the certificate.
+            string authContextURL = aadInstance + tenantId;
+            AuthenticationContext authContext = new AuthenticationContext(authContextURL);
+            X509Certificate2 cert = ReadCertificateFromStore(certName);
+
+            ClientAssertionCertificate credential = new ClientAssertionCertificate(clientId, cert);
+            AuthenticationResult result = await authContext.AcquireTokenAsync("https://management.azure.com/", credential);
+            if (result == null)
+            {
+                throw new InvalidOperationException("Failed to obtain the JWT token");
+            }
+
+            string token = result.AccessToken;
+            string subscriptionId = "<Your_Subscription_ID>";
+            string rgName = "<ResourceGroup_of_your_Cosmos_account>";
+            string accountName = "<Your_Cosmos_account_name>";
+            string cosmosDBRestCall = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/listKeys?api-version=2015-04-08";
+
+            // Call the listKeys endpoint with the bearer token.
+            Uri restCall = new Uri(cosmosDBRestCall);
+            HttpClient httpClient = new HttpClient();
+            httpClient.DefaultRequestHeaders.Remove("Authorization");
+            httpClient.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
+            HttpResponseMessage response = await httpClient.PostAsync(restCall, null);
+
+            Console.WriteLine("Got result {0} and keys {1}", response.StatusCode.ToString(), response.Content.ReadAsStringAsync().Result);
+        }
+
+        /// <summary>
+        /// Reads the certificate from the current user's certificate store.
+        /// </summary>
+        private static X509Certificate2 ReadCertificateFromStore(string certName)
+        {
+            X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
+            store.Open(OpenFlags.ReadOnly);
+            X509Certificate2Collection certCollection = store.Certificates;
+
+            // Find unexpired certificates.
+            X509Certificate2Collection currentCerts = certCollection.Find(X509FindType.FindByTimeValid, DateTime.Now, false);
+
+            // From the collection of unexpired certificates, find the ones with the correct name.
+            X509Certificate2Collection signingCert = currentCerts.Find(X509FindType.FindBySubjectName, certName, false);
+
+            // Return the most recently issued certificate that has the right name and is currently valid.
+            X509Certificate2 cert = signingCert.OfType<X509Certificate2>().OrderByDescending(c => c.NotBefore).FirstOrDefault();
+            store.Close();
+            return cert;
+        }
+    }
+}
+```
+
+This application outputs the primary and secondary keys of your Azure Cosmos account.
+
+Similar to the previous section, you can view the Activity log of your Azure Cosmos account to validate that the get keys request event was initiated by the "sampleApp" application.
+
+## Next steps
+
+* [Secure Azure Cosmos keys using Azure Key Vault](../access-secrets-from-keyvault.md)
+
+* [Security baseline for Azure Cosmos DB](../security-baseline.md)
cosmos-db Change Feed Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/change-feed-design-patterns.md
+
+ Title: Change feed design patterns in Azure Cosmos DB
+description: Overview of common change feed design patterns
+ Last updated : 08/26/2021
+# Change feed design patterns in Azure Cosmos DB
+
+The Azure Cosmos DB change feed enables efficient processing of large datasets with a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed. This document focuses on common change feed design patterns, design tradeoffs, and change feed limitations.
+
+Azure Cosmos DB is well-suited for IoT, gaming, retail, and operational logging applications. A common design pattern in these applications is to use changes to the data to trigger additional actions. Examples of additional actions include:
+
+* Triggering a notification or a call to an API, when an item is inserted or updated.
+* Real-time stream processing for IoT or real-time analytics processing on operational data.
+* Data movement such as synchronizing with a cache, a search engine, a data warehouse, or cold storage.
+
+The change feed in Azure Cosmos DB enables you to build efficient and scalable solutions for each of these patterns, as shown in the following image:
+
+## Event computing and notifications
+
+The Azure Cosmos DB change feed can simplify scenarios that need to trigger a notification or send a call to an API based on a certain event. You can use the [Change Feed Processor Library](change-feed-processor.md) to automatically poll your container for changes and call an external API each time there is a write or update.
+
+You can also selectively trigger a notification or send a call to an API based on specific criteria. For example, if you are reading from the change feed using [Azure Functions](change-feed-functions.md), you can put logic into the function to only send a notification if specific criteria have been met. While the Azure Function code would execute during each write and update, the notification would only be sent if the criteria were met, as shown in the sketch below.
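+
+The following is a minimal sketch of this pattern as an Azure Function; the database, container, and connection setting names, the `price` criteria, and the `SendNotificationAsync` helper are all hypothetical placeholders:
+
+```csharp
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs;
+
+public static class SelectiveNotification
+{
+    [FunctionName("NotifyOnExpensiveItems")]
+    public static async Task Run(
+        [CosmosDBTrigger(
+            databaseName: "store",
+            collectionName: "items",
+            ConnectionStringSetting = "CosmosDBConnection",
+            LeaseCollectionName = "leases")] IReadOnlyList<Document> changes)
+    {
+        foreach (Document doc in changes)
+        {
+            // The function executes for every write and update, but the
+            // notification is only sent when the criteria are met.
+            if (doc.GetPropertyValue<double>("price") > 100)
+            {
+                await SendNotificationAsync(doc.Id); // hypothetical external API call
+            }
+        }
+    }
+
+    // Hypothetical placeholder for the call to your notification system.
+    private static Task SendNotificationAsync(string itemId) => Task.CompletedTask;
+}
+```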
+
+## Real-time stream processing
+
+The Azure Cosmos DB change feed can be used for real-time stream processing for IoT or real-time analytics processing on operational data.
+For example, you might receive and store event data from devices, sensors, infrastructure, and applications, and process these events in real time, using [Spark](../../hdinsight/spark/apache-spark-overview.md). The following image shows how you can implement a lambda architecture using Azure Cosmos DB via the change feed:
+
+In many cases, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hubs or Apache Kafka. The change feed is a great alternative due to Azure Cosmos DB's ability to support a sustained high rate of data ingestion with guaranteed low read and write latency. The advantages of the Azure Cosmos DB change feed over a message queue include:
+
+### Data persistence
+
+Data written to Azure Cosmos DB will show up in the change feed and be retained until deleted. Message queues typically have a maximum retention period. For example, [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) offers a maximum data retention of 90 days.
+
+### Querying ability
+
+In addition to reading from a Cosmos container's change feed, you can also run SQL queries on the data stored in Azure Cosmos DB. The change feed isn't a duplication of data already in the container but rather just a different mechanism of reading the data. Therefore, if you read data from the change feed, it will always be consistent with queries of the same Azure Cosmos DB container.
+
+### High availability
+
+Azure Cosmos DB offers up to 99.999% read and write availability. Unlike many message queues, Azure Cosmos DB data can be easily globally distributed and configured with an [RTO (Recovery Time Objective)](../consistency-levels.md#rto) of zero.
+
+After processing items in the change feed, you can build a materialized view and persist aggregated values back in Azure Cosmos DB. If you're using Azure Cosmos DB to build a game, you can, for example, use change feed to implement real-time leaderboards based on scores from completed games.
+
+## Data movement
+
+You can also read from the change feed for real-time data movement.
+
+For example, the change feed helps you perform the following tasks efficiently:
+
+* Update a cache, search index, or data warehouse with data stored in Azure Cosmos DB.
+
+* Perform zero down-time migrations to another Azure Cosmos account or another Azure Cosmos container with a different logical partition key.
+
+* Implement an application-level data tiering and archival. For example, you can store "hot data" in Azure Cosmos DB and age out "cold data" to other storage systems such as [Azure Blob Storage](../../storage/common/storage-introduction.md).
+
+When you have to [denormalize data across partitions and containers](how-to-model-partition-example.md#v2-introducing-denormalization-to-optimize-read-queries), you can read from your container's change feed as a source for this data replication. Real-time data replication with the change feed can only guarantee eventual consistency. You can [monitor how far the Change Feed Processor lags behind](how-to-use-change-feed-estimator.md) in processing changes in your Cosmos container.
+
+## Event sourcing
+
+The [event sourcing pattern](/azure/architecture/patterns/event-sourcing) involves using an append-only store to record the full series of actions taken on data. Azure Cosmos DB's change feed is a great choice as a central data store in event sourcing architectures where all data ingestion is modeled as writes (no updates or deletes). In this case, each write to Azure Cosmos DB is an "event" and you'll have a full record of past events in the change feed. Typical uses of the events published by the central event store are for maintaining materialized views or for integration with external systems. Because there is no time limit for retention in the change feed, you can replay all past events by reading from the beginning of your Cosmos container's change feed.
+
+You can have [multiple change feed consumers subscribe to the same container's change feed](how-to-create-multiple-cosmos-db-triggers.md#optimizing-containers-for-multiple-triggers). Aside from the [lease container's](change-feed-processor.md#components-of-the-change-feed-processor) provisioned throughput, there is no cost to utilize the change feed. The change feed is available in every container regardless of whether it is utilized.
+
+Azure Cosmos DB is a great central append-only persistent data store in the event sourcing pattern because of its strengths in horizontal scalability and high availability. In addition, the Change Feed Processor library offers an ["at least once"](change-feed-processor.md#error-handling) guarantee, ensuring that you won't miss processing any events.
+
+## Current limitations
+
+The change feed has important limitations that you should understand. While items in a Cosmos container will always remain in the change feed, the change feed is not a full operation log. There are important areas to consider when designing an application that utilizes the change feed.
+
+### Intermediate updates
+
+Only the most recent change for a given item is included in the change feed. When processing changes, you will read the latest available item version. If there are multiple updates to the same item in a short period of time, it is possible to miss processing intermediate updates. If you would like to track updates and be able to replay past updates to an item, we recommend modeling these updates as a series of writes instead.
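+
+As a minimal sketch of that approach, each update can be written as a new item with its own `id`, so every version is preserved in the change feed; the `DeviceReading` type and its properties are hypothetical:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+// Hypothetical event type: every state change is a new item with a new id,
+// so no intermediate update can be lost from the change feed.
+public class DeviceReading
+{
+    public string id { get; set; }        // unique per event
+    public string deviceId { get; set; }  // partition key
+    public double temperature { get; set; }
+    public DateTime recordedAt { get; set; }
+}
+
+public static class ReadingWriter
+{
+    public static Task WriteReadingAsync(Container container, string deviceId, double temperature)
+    {
+        var reading = new DeviceReading
+        {
+            id = Guid.NewGuid().ToString(), // never overwrite an earlier event
+            deviceId = deviceId,
+            temperature = temperature,
+            recordedAt = DateTime.UtcNow
+        };
+        return container.CreateItemAsync(reading, new PartitionKey(deviceId));
+    }
+}
+```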
+
+### Deletes
+
+The change feed does not capture deletes. If you delete an item from your container, it is also removed from the change feed. The most common method of handling this is adding a soft marker on the items that are being deleted. You can add a property called "deleted" and set it to "true" at the time of deletion. This document update will show up in the change feed. You can set a TTL on this item so that it can be automatically deleted later.
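+
+Here's a minimal sketch of the soft-delete pattern; it assumes the container has TTL enabled (for example, a default TTL of -1 so per-item `ttl` values take effect), and the item shape and one-hour purge window are hypothetical:
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+using Newtonsoft.Json;
+
+// Hypothetical item shape with a soft-delete marker and a per-item TTL.
+public class SoftDeletableItem
+{
+    public string id { get; set; }
+    public string partitionKey { get; set; }
+    public bool deleted { get; set; }
+
+    [JsonProperty(PropertyName = "ttl", NullValueHandling = NullValueHandling.Ignore)]
+    public int? TimeToLiveSeconds { get; set; }
+}
+
+public static class SoftDelete
+{
+    // Instead of deleting, upsert the item with deleted = true and a TTL, so the
+    // "delete" shows up in the change feed and the item is purged automatically later.
+    public static Task SoftDeleteAsync(Container container, SoftDeletableItem item)
+    {
+        item.deleted = true;
+        item.TimeToLiveSeconds = 3600; // purge one hour after the soft delete
+        return container.UpsertItemAsync(item, new PartitionKey(item.partitionKey));
+    }
+}
+```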
+
+### Guaranteed order
+
+There is guaranteed order in the change feed within a partition key value but not across partition key values. You should select a partition key that gives you a meaningful order guarantee.
+
+For example, consider a retail application using the event sourcing design pattern. In this application, different user actions are each "events" which are modeled as writes to Azure Cosmos DB. Imagine if some example events occurred in the following sequence:
+
+1. Customer adds Item A to their shopping cart
+2. Customer adds Item B to their shopping cart
+3. Customer removes Item A from their shopping cart
+4. Customer checks out and shopping cart contents are shipped
+
+A materialized view of current shopping cart contents is maintained for each customer. This application must ensure that these events are processed in the order in which they occur. If, for example, the cart checkout were to be processed before Item A's removal, it is likely that the customer would have had Item A shipped, as opposed to the desired Item B. In order to guarantee that these four events are processed in order of their occurrence, they should fall within the same partition key value. If you select **username** (each customer has a unique username) as the partition key, you can guarantee that these events show up in the change feed in the same order in which they are written to Azure Cosmos DB.
+
+## Examples
+
+Here are some real-world change feed code examples that extend beyond the scope of the samples provided in Microsoft docs:
+
+- [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html)
+- [IoT use case centered around the change feed](https://github.com/AzureCosmosDB/scenario-based-labs)
+- [Retail use case centered around the change feed](https://github.com/AzureCosmosDB/scenario-based-labs)
+
+## Next steps
+
+* [Change feed overview](../change-feed.md)
+* [Options to read change feed](read-change-feed.md)
+* [Using change feed with Azure Functions](change-feed-functions.md)
+* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Change Feed Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/change-feed-functions.md
+
+ Title: How to use Azure Cosmos DB change feed with Azure Functions
+description: Use Azure Functions to connect to Azure Cosmos DB change feed. Later you can create reactive Azure functions that are triggered on every new event.
+ Last updated : 12/03/2019
+# Serverless event-based architectures with Azure Cosmos DB and Azure Functions
+
+Azure Functions provides the simplest way to connect to the [change feed](../change-feed.md). You can create small reactive Azure Functions that will be automatically triggered on each new event in your Azure Cosmos container's change feed.
+
+With the [Azure Functions trigger for Cosmos DB](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), you can leverage the [Change Feed Processor](change-feed-processor.md)'s scaling and reliable event detection functionality without the need to maintain any [worker infrastructure](change-feed-processor.md). Just focus on your Azure Function's logic without worrying about the rest of the event-sourcing pipeline. You can even mix the Trigger with any other [Azure Functions bindings](../../azure-functions/functions-triggers-bindings.md#supported-bindings).
+
+> [!NOTE]
+> Currently, the Azure Functions trigger for Cosmos DB is supported for use with the Core (SQL) API only.
+
+## Requirements
+
+To implement a serverless event-based flow, you need:
+
+* **The monitored container**: The monitored container is the Azure Cosmos container being monitored, and it stores the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
+* **The lease container**: The lease container maintains state across multiple and dynamic serverless Azure Function instances and enables dynamic scaling. This lease container can be manually or automatically created by the Azure Functions trigger for Cosmos DB. To automatically create the lease container, set the *CreateLeaseCollectionIfNotExists* flag in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration). Partitioned lease containers are required to have a `/id` partition key definition.
+
+## Create your Azure Functions trigger for Cosmos DB
+
+Creating your Azure Function with an Azure Functions trigger for Cosmos DB is now supported across all Azure Functions IDE and CLI integrations:
+
+* [Visual Studio Extension](../../azure-functions/functions-develop-vs.md) for Visual Studio users.
+* [Visual Studio Code Extension](/azure/developer/javascript/tutorial-vscode-serverless-node-01) for Visual Studio Code users.
+* And finally, [Core CLI tooling](../../azure-functions/functions-run-local.md#create-func) for a cross-platform, IDE-agnostic experience.
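+
+Regardless of the tooling you choose, the resulting trigger has the same shape. Here's a minimal C# sketch, where the database, container, and connection setting names are placeholders and *CreateLeaseCollectionIfNotExists* takes care of creating the lease container:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Extensions.Logging;
+
+public static class ChangeFeedFunction
+{
+    // Fires for every batch of inserts and updates in the monitored container.
+    [FunctionName("CosmosTrigger")]
+    public static void Run(
+        [CosmosDBTrigger(
+            databaseName: "mydatabase",
+            collectionName: "monitored",
+            ConnectionStringSetting = "CosmosDBConnection",
+            LeaseCollectionName = "leases",
+            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
+        ILogger log)
+    {
+        foreach (Document doc in changes)
+        {
+            log.LogInformation($"Detected change for item with id {doc.Id}");
+        }
+    }
+}
+```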
+
+## Run your trigger locally
+
+You can run your [Azure Function locally](../../azure-functions/functions-develop-local.md) with the [Azure Cosmos DB Emulator](../local-emulator.md) to create and develop your serverless event-based flows without an Azure Subscription or incurring any costs.
+
+If you want to test live scenarios in the cloud, you can [Try Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without any credit card or Azure subscription required.
+
+## Next steps
+
+You can now continue to learn more about change feed in the following articles:
+
+* [Overview of change feed](../change-feed.md)
+* [Ways to read change feed](read-change-feed.md)
+* [Using the change feed processor library](change-feed-processor.md)
+* [Serverless database computing using Azure Cosmos DB and Azure Functions](serverless-computing-database.md)
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/change-feed-processor.md
+
+ Title: Change feed processor in Azure Cosmos DB
+description: Learn how to use the Azure Cosmos DB change feed processor to read the change feed, the components of the change feed processor
+ms.devlang: dotnet
+ Last updated : 07/20/2021
+# Change feed processor in Azure Cosmos DB
+
+The change feed processor is part of the [Azure Cosmos DB SDK V3](https://github.com/Azure/azure-cosmos-dotnet-v3). It simplifies the process of reading the change feed and distributes the event processing across multiple consumers effectively.
+
+The main benefit of the change feed processor library is its fault-tolerant behavior, which assures "at-least-once" delivery of all the events in the change feed.
+
+## Components of the change feed processor
+
+There are four main components of implementing the change feed processor:
+
+1. **The monitored container:** The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
+
+1. **The lease container:** The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
+
+1. **The host:** A host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different **instance name**.
+
+1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
+
+To further understand how these four elements of change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. We see that the partition key values are distributed in ranges that contain items.
+There are two host instances and the change feed processor is assigning different ranges of partition key values to each instance to maximize compute distribution.
+Each range is being read in parallel and its progress is maintained separately from other ranges in the lease container.
+
+## Implementing the change feed processor
+
+The point of entry is always the monitored container; from a `Container` instance, you call `GetChangeFeedProcessorBuilder`:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=DefineProcessor)]
+
+The first parameter is a distinct name that describes the goal of this processor, and the second is the delegate implementation that will handle the changes.
+
+An example of a delegate would be:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)]
+
+Finally, you define a name for this processor instance with `WithInstanceName` and specify the container that will maintain the lease state with `WithLeaseContainer`.
+
+Calling `Build` will give you the processor instance that you can start by calling `StartAsync`.
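+
+Putting these pieces together, a minimal sketch of the full setup might look like the following; the `ToDoItem` type, the processor name, and the instance name are placeholders:
+
+```csharp
+using System.Collections.Generic;
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public class ToDoItem
+{
+    public string id { get; set; }
+}
+
+public static class ProcessorSetup
+{
+    public static async Task<ChangeFeedProcessor> StartProcessorAsync(
+        Container monitoredContainer, Container leaseContainer)
+    {
+        ChangeFeedProcessor processor = monitoredContainer
+            .GetChangeFeedProcessorBuilder<ToDoItem>("todoProcessor", HandleChangesAsync)
+            .WithInstanceName("host-1")         // must be unique per host instance
+            .WithLeaseContainer(leaseContainer) // stores leases and progress
+            .Build();
+
+        await processor.StartAsync();
+        return processor;
+    }
+
+    private static Task HandleChangesAsync(
+        IReadOnlyCollection<ToDoItem> changes, CancellationToken cancellationToken)
+    {
+        // Per-batch processing logic goes here.
+        return Task.CompletedTask;
+    }
+}
+```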
+
+## Processing life cycle
+
+The normal life cycle of a host instance is:
+
+1. Read the change feed.
+1. If there are no changes, sleep for a predefined amount of time (customizable with `WithPollInterval` in the Builder) and go to #1.
+1. If there are changes, send them to the **delegate**.
+1. When the delegate finishes processing the changes **successfully**, update the lease store with the latest processed point in time and go to #1.
+
+## Error handling
+
+The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values, and it restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee: if the delegate code throws an exception, that batch is retried.
+
+> [!NOTE]
+> There is only one scenario where a batch of changes will not be retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry would use the [initial starting configuration](#starting-time), which might or might not include the last batch.
+
+To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might be another Cosmos container; the exact data store does not matter, as long as the unprocessed changes are persisted. A sketch of this pattern follows.
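+
+In the sketch below, `DeadLetterContainer` and `ProcessItemAsync` are hypothetical placeholders, and the dead-letter container is assumed to be partitioned on `/id`:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public static class DeadLetterHandler
+{
+    // Hypothetical container, created separately and partitioned on /id,
+    // that stores batches the delegate failed to process.
+    public static Container DeadLetterContainer { get; set; }
+
+    public static async Task HandleChangesAsync(
+        IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken)
+    {
+        try
+        {
+            foreach (dynamic item in changes)
+            {
+                await ProcessItemAsync(item);
+            }
+        }
+        catch (Exception ex)
+        {
+            // Persist the failed batch instead of rethrowing, so the processor
+            // advances the lease rather than retrying this batch forever.
+            var deadLetterDocument = new
+            {
+                id = Guid.NewGuid().ToString(),
+                error = ex.Message,
+                failedChanges = changes
+            };
+            await DeadLetterContainer.CreateItemAsync(
+                deadLetterDocument, new PartitionKey(deadLetterDocument.id));
+        }
+    }
+
+    // Hypothetical placeholder for your business logic.
+    private static Task ProcessItemAsync(dynamic item) => Task.CompletedTask;
+}
+```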
+
+In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed. You can use this estimation to understand if your change feed processor is "stuck" or lagging behind due to available resources like CPU, memory, and network bandwidth.
+
+## Deployment unit
+
+A single change feed processor deployment unit consists of one or more instances with the same `processorName` and lease container configuration. You can have many deployment units, where each one has a different business flow for the changes and each deployment unit consists of one or more instances.
+
+For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
+
+## Dynamic scaling
+
+As mentioned before, within a deployment unit you can have one or more instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
+
+1. All instances should have the same lease container configuration.
+1. All instances should have the same `processorName`.
+1. Each instance needs to have a different instance name (`WithInstanceName`).
+
+If these three conditions apply, then the change feed processor will, using an equal-distribution algorithm, distribute all the leases in the lease container across all running instances of that deployment unit and parallelize compute. One lease can only be owned by one instance at a given time, so the maximum number of instances equals the number of leases.
+
+The number of instances can grow and shrink, and the change feed processor will dynamically adjust the load by redistributing accordingly.
+
+Moreover, the change feed processor can dynamically adjust to a container's scale as throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.
+
+## Change feed and provisioned throughput
+
+Change feed read operations on the monitored container will consume RUs.
+
+Operations on the lease container consume RUs. The higher the number of instances using the same lease container, the higher the potential RU consumption will be. Remember to monitor your RU consumption on the leases container if you decide to scale and increment the number of instances.
+
+## Starting time
+
+By default, when a change feed processor starts the first time, it will initialize the leases container, and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
+
+### Reading from a previous date and time
+
+It's possible to initialize the change feed processor to read changes starting at a **specific date and time**, by passing an instance of a `DateTime` to the `WithStartTime` builder extension:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=TimeInitialization)]
+
+The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.
+
+> [!NOTE]
+> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
+
+### Reading from the beginning
+
+In other scenarios like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can use `WithStartTime` on the builder extension, but passing `DateTime.MinValue.ToUniversalTime()`, which would generate the UTC representation of the minimum `DateTime` value, like so:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartFromBeginningInitialization)]
+
+The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
+
+> [!NOTE]
+> These customization options only work to set up the starting point in time of the change feed processor. Once the lease container is initialized for the first time, changing them has no effect.
+
+## Where to host the change feed processor
+
+The change feed processor can be hosted in any platform that supports long running processes or tasks:
+
+* A continuous running [Azure WebJob](/learn/modules/run-web-app-background-task-with-webjobs/).
+* A process in an [Azure Virtual Machine](/azure/architecture/best-practices/background-jobs#azure-virtual-machines).
+* A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service).
+* An [ASP.NET hosted service](/aspnet/core/fundamentals/host/hosted-services).
+
+While the change feed processor can run in short-lived environments because the lease container maintains the state, the startup cycle of these environments will add delay to receiving the notifications (due to the overhead of starting the processor every time the environment is started).
+
+## Additional resources
+
+* [Azure Cosmos DB SDK](sql-api-sdk-dotnet.md)
+* [Complete sample application on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor)
+* [Additional usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
+* [Cosmos DB workshop labs for change feed processor](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html#consume-cosmos-db-change-feed-via-the-change-feed-processor)
+
+## Next steps
+
+You can now proceed to learn more about change feed processor in the following articles:
+
+* [Overview of change feed](../change-feed.md)
+* [Change feed pull model](change-feed-pull-model.md)
+* [How to migrate from the change feed processor library](how-to-migrate-from-change-feed-library.md)
+* [Using the change feed estimator](how-to-use-change-feed-estimator.md)
+* [Change feed processor start time](#starting-time)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/change-feed-pull-model.md
+
+ Title: Change feed pull model
+description: Learn how to use the Azure Cosmos DB change feed pull model to read the change feed and the differences between the pull model and Change Feed Processor
+ms.devlang: dotnet
+ Last updated : 08/02/2021
+# Change feed pull model in Azure Cosmos DB
+
+With the change feed pull model, you can consume the Azure Cosmos DB change feed at your own pace. As you can already do with the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.
+
+## Comparing with change feed processor
+
+Many scenarios can process the change feed using either the [change feed processor](change-feed-processor.md) or the pull model. The pull model's continuation tokens and the change feed processor's lease container are both "bookmarks" for the last processed item (or batch of items) in the change feed.
+
+However, you can't convert continuation tokens to a lease container (or vice versa).
+
+> [!NOTE]
+> In most cases when you need to read from the change feed, the simplest option is to use the [change feed processor](change-feed-processor.md).
+
+You should consider using the pull model in these scenarios:
+
+- Read changes from a particular partition key
+- Control the pace at which your client receives changes for processing
+- Perform a one-time read of the existing data in the change feed (for example, to do a data migration)
+
+Here are some key differences between the change feed processor and the pull model:
+
+|Feature | Change feed processor| Pull model |
+| | | |
+| Keeping track of current point in processing change feed | Lease (stored in an Azure Cosmos DB container) | Continuation token (stored in memory or manually persisted) |
+| Ability to replay past changes | Yes, with push model | Yes, with pull model|
+| Polling for future changes | Automatically checks for changes based on user-specified `WithPollInterval` | Manual |
+| Behavior where there are no new changes | Automatically wait `WithPollInterval` and recheck | Must check status and manually recheck |
+| Process changes from entire container | Yes, and automatically parallelized across multiple threads/machine consuming from the same container| Yes, and manually parallelized using FeedRange |
+| Process changes from just a single partition key | Not supported | Yes|
+
+> [!NOTE]
+> Unlike when reading using the change feed processor, you must explicitly handle cases where there are no new changes.
+
+## Consuming an entire container's changes
+
+You can create a `FeedIterator` to process the change feed using the pull model. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed using that specific `FeedIterator`.
+
+You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. When set, this property sets the maximum number of items received per page. If operations in the monitored collection are performed
+through stored procedures, transaction scope is preserved when reading items from the Change Feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of
+one atomic batch.
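+
+For example, here's a sketch of requesting smaller pages, reusing the `container` and `User` objects from the surrounding examples:
+
+```csharp
+// Ask for at most 100 items per page; a page can still exceed this when a
+// stored procedure transaction's changes must be returned as one atomic batch.
+ChangeFeedRequestOptions options = new ChangeFeedRequestOptions
+{
+    PageSizeHint = 100
+};
+
+FeedIterator<User> iteratorWithPageSize = container.GetChangeFeedIterator<User>(
+    ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental, options);
+```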
+
+The `FeedIterator` comes in two flavors. In addition to the examples below that return entity objects, you can also obtain the response with `Stream` support. Streams allow you to read data without having it first deserialized, saving on client resources.
+
+Here's an example for obtaining a `FeedIterator` that returns entity objects, in this case a `User` object:
+
+```csharp
+FeedIterator<User> iteratorWithPocos = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+```
+
+Here's an example for obtaining a `FeedIterator` that returns a `Stream`:
+
+```csharp
+FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+```
+
+If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example that starts reading all changes from the current time:
+
+```csharp
+FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
+
+while (iteratorForTheEntireContainer.HasMoreResults)
+{
+ FeedResponse<User> response = await iteratorForTheEntireContainer.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `HasMoreResults` is always true. When you try to read the change feed and there are no new changes available, you'll receive a response with `NotModified` status. In the above example, it is handled by waiting 5 seconds before rechecking for changes.
+
+## Consuming a partition key's changes
+
+In some cases, you may only want to process a specific partition key's changes. You can obtain a `FeedIterator` for a specific partition key and process the changes the same way that you can for an entire container.
+
+```csharp
+FeedIterator<User> iteratorForPartitionKey = container.GetChangeFeedIterator<User>(
+    ChangeFeedStartFrom.Beginning(FeedRange.FromPartitionKey(new PartitionKey("PartitionKeyValue"))), ChangeFeedMode.Incremental);
+
+while (iteratorForPartitionKey.HasMoreResults)
+{
+    FeedResponse<User> response = await iteratorForPartitionKey.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+## Using FeedRange for parallelization
+
+In the [change feed processor](change-feed-processor.md), work is automatically spread across multiple consumers. In the change feed pull model, you can use the `FeedRange` to parallelize the processing of the change feed. A `FeedRange` represents a range of partition key values.
+
+Here's an example showing how to obtain a list of ranges for your container:
+
+```csharp
+IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();
+```
+
+When you obtain a list of FeedRanges for your container, you'll get one `FeedRange` per [physical partition](../partitioning-overview.md#physical-partitions).
+
+Using a `FeedRange`, you can then create a `FeedIterator` to parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to obtain a `FeedIterator` for the entire container or a single partition key, you can use FeedRanges to obtain multiple FeedIterators, which can process the change feed in parallel.
+
+If you want to use FeedRanges, you need an orchestrator process that obtains the FeedRanges and distributes them to the consuming machines. This distribution could be:
+
+* Using `FeedRange.ToJsonString` and distributing this string value. The consumers can use this value with `FeedRange.FromJsonString`, as shown in the sketch after this list.
+* If the distribution is in-process, passing the `FeedRange` object reference.
+
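+Here's a sketch of that JSON hand-off, reusing the `container` and `User` objects from the other examples:
+
+```csharp
+// On the orchestrator: obtain the ranges and serialize one for a consumer.
+IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();
+string serializedRange = ranges[0].ToJsonString(); // send this string to a consumer
+
+// On the consumer: rehydrate the range and read only that slice of the feed.
+FeedRange range = FeedRange.FromJsonString(serializedRange);
+FeedIterator<User> iteratorForRange = container.GetChangeFeedIterator<User>(
+    ChangeFeedStartFrom.Beginning(range), ChangeFeedMode.Incremental);
+```
+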
+Here's a sample that shows how to read from the beginning of the container's change feed using two hypothetical separate machines that are reading in parallel:
+
+Machine 1:
+
+```csharp
+FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.Incremental);
+while (iteratorA.HasMoreResults)
+{
+ FeedResponse<User> response = await iteratorA.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+Machine 2:
+
+```csharp
+FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.Incremental);
+while (iteratorB.HasMoreResults)
+{
+    FeedResponse<User> response = await iteratorB.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ await Task.Delay(TimeSpan.FromSeconds(5));
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+```
+
+## Saving continuation tokens
+
+You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes and allows the `FeedIterator` to resume at this point later. The following code will read through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be resumed later.
+
+```csharp
+FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+
+string continuation = null;
+
+while (iterator.HasMoreResults)
+{
+ FeedResponse<User> response = await iterator.ReadNextAsync();
+
+ if (response.StatusCode == HttpStatusCode.NotModified)
+ {
+ Console.WriteLine($"No new changes");
+ continuation = response.ContinuationToken;
+ // Stop the consumption since there are no new changes
+ break;
+ }
+ else
+ {
+ foreach (User user in response)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
+}
+
+// Some time later when I want to check changes again
+FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.Incremental);
+```
+
+As long as the Cosmos container still exists, a FeedIterator's continuation token never expires.
+
+## Next steps
+
+* [Overview of change feed](../change-feed.md)
+* [Using the change feed processor](change-feed-processor.md)
+* [Trigger Azure Functions](change-feed-functions.md)
cosmos-db Changefeed Ecommerce Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/changefeed-ecommerce-solution.md
+
+ Title: Use Azure Cosmos DB change feed to visualize real-time data analytics
+description: This article describes how change feed can be used by a retail company to understand user patterns, perform real-time data analysis and visualization
+ms.devlang: java
+ Last updated : 05/28/2019
+# Use Azure Cosmos DB change feed to visualize real-time data analytics
+
+The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are being created or modified. Change feed support works by listening to an Azure Cosmos container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. To learn more about change feed, see the [working with change feed](../change-feed.md) article.
+
+This article describes how change feed can be used by an e-commerce company to understand user patterns, perform real-time data analysis and visualization. You will analyze events such as a user viewing an item, adding an item to their cart, or purchasing an item. When one of these events occurs, a new record is created, and the change feed logs that record. Change feed then triggers a series of steps resulting in visualization of metrics that analyze the company performance and activity. Sample metrics that you can visualize include revenue, unique site visitors, most popular items, and average price of the items that are viewed versus added to a cart versus purchased. These sample metrics can help an e-commerce company evaluate its site popularity, develop its advertising and pricing strategies, and make decisions regarding what inventory to invest in.
+
+If you're interested in watching a video about the solution before getting started, see the following video:
+
+> [!VIDEO https://www.youtube.com/embed/AYOiMkvxlzo]
+>
+
+## Solution components
+The following diagram represents the data flow and components involved in the solution:
+
+
+1. **Data Generation:** A data simulator is used to generate retail data that represents events such as a user viewing an item, adding an item to their cart, and purchasing an item. You can generate a large set of sample data by using the data generator. The generated sample data contains documents in the following format:
+
+ ```json
+ {
+ "CartID": 2486,
+ "Action": "Viewed",
+ "Item": "Women's Denim Jacket",
+ "Price": 31.99
+ }
+ ```
+
+2. **Cosmos DB:** The generated data is stored in an Azure Cosmos container.
+
+3. **Change Feed:** The change feed will listen for changes to the Azure Cosmos container. Each time a new document is added into the collection (that is, when an event occurs, such as a user viewing an item, adding an item to their cart, or purchasing an item), the change feed will trigger an [Azure Function](../../azure-functions/functions-overview.md).
+
+4. **Azure Function:** The Azure Function processes the new data and sends it to an [Azure Event Hub](../../event-hubs/event-hubs-about.md).
+
+5. **Event Hub:** The Azure Event Hub stores these events and sends them to [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) to perform further analysis.
+
+6. **Azure Stream Analytics:** Azure Stream Analytics defines queries to process the events and perform real-time data analysis. This data is then sent to [Microsoft Power BI](/power-bi/desktop-what-is-desktop).
+
+7. **Power BI:** Power BI is used to visualize the data sent by Azure Stream Analytics. You can build a dashboard to see how the metrics change in real time.
+
+## Prerequisites
+
+* Microsoft .NET Framework 4.7.1 or higher
+
+* Microsoft .NET Core 2.1 (or higher)
+
+* Visual Studio with Universal Windows Platform development, .NET desktop development, and ASP.NET and web development workloads
+
+* Microsoft Azure Subscription
+
+* Microsoft Power BI Account
+
+* Download the [Azure Cosmos DB change feed lab](https://github.com/Azure-Samples/azure-cosmos-db-change-feed-dotnet-retail-sample) from GitHub.
+
+## Create Azure resources
+
+Create the Azure resources required by the solution: Azure Cosmos DB, a storage account, an event hub, and Stream Analytics. You will deploy these resources through an Azure Resource Manager template. Use the following steps to deploy these resources:
+
+1. Set the Windows PowerShell execution policy to **Unrestricted**. To do so, open **Windows PowerShell as an Administrator** and run the following commands:
+
+ ```powershell
+ Get-ExecutionPolicy
+ Set-ExecutionPolicy Unrestricted
+ ```
+
+2. From the GitHub repository you downloaded in the previous step, navigate to the **Azure Resource Manager** folder, and open the file called **parameters.json**.
+
+3. Provide values for the cosmosdbaccount_name, eventhubnamespace_name, and storageaccount_name parameters as indicated in the **parameters.json** file. You'll need to use the names that you give to each of your resources later.
+
+4. From **Windows PowerShell**, navigate to the **Azure Resource Manager** folder and run the following command:
+
+ ```powershell
+ .\deploy.ps1
+ ```
+5. When prompted, enter your Azure **Subscription ID**, **changefeedlab** for the resource group name, and **run1** for the deployment name. Once the resources begin to deploy, it may take up to 10 minutes for the deployment to complete.
+
+## Create a database and the collection
+
+You will now create a collection to hold e-commerce site events. When a user views an item, adds an item to their cart, or purchases an item, the collection will receive a record that includes the action ("viewed", "added", or "purchased"), the name of the item involved, the price of the item involved, and the ID number of the user cart involved.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's created by the template deployment.
+
+2. From the **Data Explorer** pane, select **New Collection** and fill the form with the following details:
+
+ * For the **Database id** field, select **Create new**, then enter **changefeedlabdatabase**. Leave the **Provision database throughput** box unchecked.
+ * For the **Collection** id field, enter **changefeedlabcollection**.
+ * For the **Partition key** field, enter **/Item**. This is case-sensitive, so make sure you enter it correctly.
+ * For the **Throughput** field, enter **10000**.
+ * Select the **OK** button.
+
+3. Next, create another collection named **leases** for change feed processing. The leases collection coordinates processing the change feed across multiple workers. A separate collection is used to store the leases, with one lease per partition.
+
+4. Return to the **Data Explorer** pane and select **New Collection** and fill the form with the following details:
+
+ * For the **Database id** field, select **Use existing**, then enter **changefeedlabdatabase**.
+ * For the **Collection id** field, enter **leases**.
+ * For **Storage capacity**, select **Fixed**.
+ * Leave the **Throughput** field set to its default value.
+ * Select the **OK** button.
+
+## Get the connection string and keys
+
+### Get the Azure Cosmos DB connection string
+
+1. Go to the [Azure portal](https://portal.azure.com/) and find the **Azure Cosmos DB Account** that's created by the template deployment.
+
+2. Navigate to the **Keys** pane and copy the PRIMARY CONNECTION STRING to a notepad or another document that you will have access to throughout the lab. You should label it **Cosmos DB Connection String**. You'll need to copy the string into your code later, so take a note of where you're storing it.
+
+### Get the storage account key and connection string
+
+Azure Storage Accounts allow users to store data. In this lab, you will use a storage account to store data that is used by the Azure Function. The Azure Function is triggered when any modification is made to the collection.
+
+1. Return to your resource group and open the storage account that you created earlier.
+
+2. Select **Access keys** from the menu on the left-hand side.
+
+3. Copy the values under **key 1** to a notepad or another document that you will have access to throughout the lab. You should label the **Key** as **Storage Key** and the **Connection string** as **Storage Connection String**. You'll need to copy these strings into your code later, so take a note and remember where you are storing them.
+
+### Get the event hub namespace connection string
+
+An Azure event hub receives the event data, then stores, processes, and forwards the data. In this lab, the event hub will receive a document every time a new event occurs (that is, when an item is viewed by a user, added to a user's cart, or purchased by a user) and then will forward that document to Azure Stream Analytics.
+
+1. Return to your resource group and open the **Event Hub Namespace** that you created and named earlier.
+
+2. Select **Shared access policies** from the menu on the left-hand side.
+
+3. Select **RootManageSharedAccessKey**. Copy the **Connection string-primary key** to a notepad or another document that you will have access to throughout the lab. You should label it **Event Hub Namespace** connection string. You'll need to copy the string into your code later, so take a note and remember where you are storing it.
+
+## Set up Azure Function to read the change feed
+
+When a new document is created, or a current document is modified in a Cosmos container, the change feed automatically adds that modified document to its history of collection changes. You will now build and run an Azure Function that processes the change feed. When a document is created or modified in the collection you created, the Azure Function will be triggered by the change feed. Then the Azure Function will send the modified document to the Event Hub.
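+
+For reference, here is a minimal Python sketch of the same flow (the lab's actual function is C#): read documents from the change feed and forward each one to the event hub. It assumes the `azure-cosmos` and `azure-eventhub` packages; connection values are placeholders.
+
+```python
+import json
+from azure.cosmos import CosmosClient
+from azure.eventhub import EventHubProducerClient, EventData
+
+cosmos = CosmosClient("<cosmos-account-uri>", credential="<primary-key>")
+container = (cosmos.get_database_client("changefeedlabdatabase")
+                   .get_container_client("changefeedlabcollection"))
+producer = EventHubProducerClient.from_connection_string(
+    "<event-hub-namespace-connection-string>", eventhub_name="event-hub1")
+
+# Forward every document that appears on the change feed as one event.
+batch = producer.create_batch()
+added = 0
+for doc in container.query_items_change_feed(is_start_from_beginning=True):
+    batch.add(EventData(json.dumps(doc)))
+    added += 1
+if added:
+    producer.send_batch(batch)
+producer.close()
+```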
+
+1. Return to the repository that you cloned on your device.
+
+2. Right-click the file named **ChangeFeedLabSolution.sln** and select **Open With Visual Studio**.
+
+3. Navigate to **local.settings.json** in Visual Studio. Then use the values you recorded earlier to fill in the blanks.
+
+4. Navigate to **ChangeFeedProcessor.cs**. In the parameters for the **Run** function, perform the following actions:
+
+    * Replace the text **YOUR COLLECTION NAME HERE** with the name of your collection. If you followed earlier instructions, the name of your collection is **changefeedlabcollection**.
+ * Replace the text **YOUR LEASES COLLECTION NAME HERE** with the name of your leases collection. If you followed earlier instructions, the name of your leases collection is **leases**.
+ * At the top of Visual Studio, make sure that the Startup Project box on the left of the green arrow says **ChangeFeedFunction**.
+    * Select **Start** at the top of the page to run the program.
+ * You can confirm that the function is running when the console app says "Job host started".
+
+## Insert data into Azure Cosmos DB
+
+To see how the change feed processes new actions on an e-commerce site, you have to simulate data that represents users viewing items from the product catalog, adding those items to their carts, and purchasing the items in their carts. This data is arbitrary and replicates what data on an e-commerce site would look like.
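+
+As a rough illustration of what the generator produces, here is a hedged Python sketch that upserts random event documents using the field names the lab's queries rely on (`Item`, `Price`, `Action`, `CartID`); the item names and value ranges are made up, and it assumes the `azure-cosmos` package.
+
+```python
+import random
+import uuid
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("<account-uri>", credential="<primary-key>")
+container = (client.get_database_client("changefeedlabdatabase")
+                   .get_container_client("changefeedlabcollection"))
+
+actions = ["Viewed", "Added", "Purchased"]
+items = ["Cosmos T-shirt", "Azure Mug", "Surface Pen"]  # illustrative catalog
+
+for _ in range(100):
+    container.upsert_item({
+        "id": str(uuid.uuid4()),
+        "CartID": random.randint(1, 1000),  # stands in for a user session
+        "Action": random.choice(actions),
+        "Item": random.choice(items),       # /Item is the partition key
+        "Price": round(random.uniform(5, 500), 2),
+    })
+```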
+
+1. Navigate back to the repository in File Explorer, and right-click **ChangeFeedFunction.sln** to open it again in a new Visual Studio window.
+
+2. Navigate to the **App.config** file. Within the `<appSettings>` block, add the endpoint and unique **PRIMARY KEY** of your Azure Cosmos DB account that you retrieved earlier.
+
+3. Add in the **collection** and **database** names. (These names should be **changefeedlabcollection** and **changefeedlabdatabase** unless you chose to name yours differently.)
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/update-connection-string.png" alt-text="Update connection strings":::
+
+4. Save the changes in all the files you edited.
+
+5. At the top of Visual Studio, make sure that the **Startup Project** box on the left of the green arrow says **DataGenerator**. Then select **Start** at the top of the page to run the program.
+
+6. Wait for the program to run. The stars mean that data is coming in! Keep the program running; it is important that lots of data is collected.
+
+7. If you navigate to the [Azure portal](https://portal.azure.com/), then to the Cosmos DB account within your resource group, then to **Data Explorer**, you will see the randomized data imported in your **changefeedlabcollection**.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/data-generated-in-portal.png" alt-text="Data generated in portal":::
+
+## Set up a stream analytics job
+
+Azure Stream Analytics is a fully managed cloud service for real-time processing of streaming data. In this lab, you will use Stream Analytics to process new events from the event hub (that is, when an item is viewed, added to a cart, or purchased), incorporate those events into real-time data analysis, and send them into Power BI for visualization.
+
+1. From the [Azure portal](https://portal.azure.com/), navigate to your resource group, then to **streamjob1** (the stream analytics job that you created in the prelab).
+
+2. Select **Inputs** as demonstrated below.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/create-input.png" alt-text="Create input":::
+
+3. Select **+ Add stream input**. Then select **Event Hub** from the drop-down menu.
+
+4. Fill the new input form with the following details:
+
+    * In the **Input alias** field, enter **input**.
+ * Select the option for **Select Event Hub from your subscriptions**.
+ * Set the **Subscription** field to your subscription.
+ * In the **Event Hub namespace** field, enter the name of your Event Hub Namespace that you created during the prelab.
+ * In the **Event Hub name** field, select the option for **Use existing** and choose **event-hub1** from the drop-down menu.
+    * Leave the **Event Hub policy name** field set to its default value.
+ * Leave **Event serialization format** as **JSON**.
+    * Leave the **Encoding** field set to **UTF-8**.
+    * Leave the **Event compression type** field set to **None**.
+ * Select the **Save** button.
+
+5. Navigate back to the stream analytics job page, and select **Outputs**.
+
+6. Select **+ Add**. Then select **Power BI** from the drop-down menu.
+
+7. To create a new Power BI output to visualize average price, perform the following actions:
+
+ * In the **Output alias** field, enter **averagePriceOutput**.
+ * Leave the **Group workspace** field set to **Authorize connection to load workspaces**.
+ * In the **Dataset name** field, enter **averagePrice**.
+ * In the **Table name** field, enter **averagePrice**.
+ * Select the **Authorize** button, then follow the instructions to authorize the connection to Power BI.
+ * Select the **Save** button.
+
+8. Then go back to **streamjob1** and select **Edit query**.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/edit-query.png" alt-text="Edit query":::
+
+9. Paste the following query into the query window. The **AVERAGE PRICE** query calculates the average price of all items that are viewed by users, the average price of all items that are added to users' carts, and the average price of all items that are purchased by users. This metric can help e-commerce companies decide what prices to sell items at and what inventory to invest in. For example, if the average price of items viewed is much higher than the average price of items purchased, then a company might choose to add less expensive items to its inventory.
+
+ ```sql
+ /*AVERAGE PRICE*/
+ SELECT System.TimeStamp AS Time, Action, AVG(Price)
+ INTO averagePriceOutput
+ FROM input
+ GROUP BY Action, TumblingWindow(second,5)
+ ```
+10. Then select **Save** in the upper left-hand corner.
+
+11. Now return to **streamjob1** and select the **Start** button at the top of the page. Azure Stream Analytics can take a few minutes to start up, but eventually you will see it change from "Starting" to "Running".
+
+## Connect to Power BI
+
+Power BI is a suite of business analytics tools to analyze data and share insights. It's a great example of how you can strategically visualize the analyzed data.
+
+1. Sign in to Power BI and navigate to **My Workspace** by opening the menu on the left-hand side of the page.
+
+2. Select **+ Create** in the top right-hand corner and then select **Dashboard** to create a dashboard.
+
+3. Select **+ Add tile** in the top right-hand corner.
+
+4. Select **Custom Streaming Data**, then select the **Next** button.
+
+5. Select **averagePrice** from **YOUR DATASETS**, then select **Next**.
+
+6. In the **Visualization Type** field, choose **Clustered bar chart** from the drop-down menu. Under **Axis**, add action. Skip **Legend** without adding anything. Then, under the next section called **Value**, add **avg**. Select **Next**, then title your chart, and select **Apply**. You should see a new chart on your dashboard!
+
+7. Now, if you want to visualize more metrics, you can go back to **streamjob1** and create three more outputs with the following fields.
+
+    a. **Output alias:** incomingRevenueOutput, **Dataset name:** incomingRevenue, **Table name:** incomingRevenue
+    b. **Output alias:** top5Output, **Dataset name:** top5, **Table name:** top5
+    c. **Output alias:** uniqueVisitorCountOutput, **Dataset name:** uniqueVisitorCount, **Table name:** uniqueVisitorCount
+
+ Then select **Edit query** and paste the following queries **above** the one you already wrote.
+
+ ```sql
+ /*TOP 5*/
+ WITH Counter AS
+ (
+ SELECT Item, Price, Action, COUNT(*) AS countEvents
+ FROM input
+ WHERE Action = 'Purchased'
+ GROUP BY Item, Price, Action, TumblingWindow(second,30)
+ ),
+ top5 AS
+ (
+ SELECT DISTINCT
+ CollectTop(5) OVER (ORDER BY countEvents) AS topEvent
+ FROM Counter
+ GROUP BY TumblingWindow(second,30)
+ ),
+ arrayselect AS
+ (
+ SELECT arrayElement.ArrayValue
+ FROM top5
+ CROSS APPLY GetArrayElements(top5.topevent) AS arrayElement
+ )
+ SELECT arrayvalue.value.item, arrayvalue.value.price, arrayvalue.value.countEvents
+ INTO top5Output
+ FROM arrayselect
+
+ /*REVENUE*/
+ SELECT System.TimeStamp AS Time, SUM(Price)
+ INTO incomingRevenueOutput
+ FROM input
+ WHERE Action = 'Purchased'
+ GROUP BY TumblingWindow(hour, 1)
+
+ /*UNIQUE VISITORS*/
+ SELECT System.TimeStamp AS Time, COUNT(DISTINCT CartID) as uniqueVisitors
+ INTO uniqueVisitorCountOutput
+ FROM input
+ GROUP BY TumblingWindow(second, 5)
+ ```
+
+ The TOP 5 query calculates the top 5 items, ranked by the number of times that they have been purchased. This metric can help e-commerce companies evaluate which items are most popular and can influence the company's advertising, pricing, and inventory decisions.
+
+    The REVENUE query calculates revenue by summing up the prices of all items purchased each hour. This metric can help e-commerce companies evaluate their financial performance and understand what times of day contribute the most revenue. This insight can shape the overall company strategy, marketing in particular.
+
+    The UNIQUE VISITORS query calculates how many unique visitors are on the site every 5 seconds by detecting unique cart IDs. This metric can help e-commerce companies evaluate their site activity and strategize how to acquire more customers.
+
+8. You can now add tiles for these datasets as well.
+
+ * For Top 5, it would make sense to do a clustered column chart with the items as the axis and the count as the value.
+ * For Revenue, it would make sense to do a line chart with time as the axis and the sum of the prices as the value. The time window to display should be the largest possible in order to deliver as much information as possible.
+ * For Unique Visitors, it would make sense to do a card visualization with the number of unique visitors as the value.
+
+ This is how a sample dashboard looks with these charts:
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/visualizations.png" alt-text="Screenshot shows a sample dashboard with charts named Average Price of Items by Action, Unique Visitors, Revenue, and Top 5 Items Purchased.":::
+
+## Optional: Visualize with an e-commerce site
+
+You will now observe how you can use your new data analysis tool with a real e-commerce site. To build the e-commerce site, you use an Azure Cosmos database to store the list of product categories (Women's, Men's, Unisex), the product catalog, and a list of the most popular items.
+
+1. Navigate back to the [Azure portal](https://portal.azure.com/), then to your **Cosmos DB account**, then to **Data Explorer**.
+
+    Add two collections under **changefeedlabdatabase**: **products** and **categories**, with **Fixed** storage capacity.
+
+    Add another collection under **changefeedlabdatabase** named **topItems**, with **/Item** as the partition key.
+
+2. Select the **topItems** collection, and under **Scale and Settings** set the **Time to Live** to **30 seconds**, so that topItems updates every 30 seconds.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/time-to-live.png" alt-text="Time to live":::
+
+3. In order to populate the **topItems** collection with the most frequently purchased items, navigate back to **streamjob1** and add a new **Output**. Select **Cosmos DB**.
+
+4. Fill in the required fields as pictured below.
+
+ :::image type="content" source="./media/changefeed-ecommerce-solution/cosmos-output.png" alt-text="Cosmos output":::
+
+5. If you added the optional TOP 5 query in the previous part of the lab, proceed to part 5a. If not, proceed to part 5b.
+
+ 5a. In **streamjob1**, select **Edit query** and paste the following query in your Azure Stream Analytics query editor below the TOP 5 query but above the rest of the queries.
+
+ ```sql
+ SELECT arrayvalue.value.item AS Item, arrayvalue.value.price, arrayvalue.value.countEvents
+ INTO topItems
+ FROM arrayselect
+ ```
+ 5b. In **streamjob1**, select **Edit query** and paste the following query in your Azure Stream Analytics query editor above all other queries.
+
+ ```sql
+ /*TOP 5*/
+ WITH Counter AS
+ (
+ SELECT Item, Price, Action, COUNT(*) AS countEvents
+ FROM input
+ WHERE Action = 'Purchased'
+ GROUP BY Item, Price, Action, TumblingWindow(second,30)
+ ),
+ top5 AS
+ (
+ SELECT DISTINCT
+ CollectTop(5) OVER (ORDER BY countEvents) AS topEvent
+ FROM Counter
+ GROUP BY TumblingWindow(second,30)
+ ),
+ arrayselect AS
+ (
+ SELECT arrayElement.ArrayValue
+ FROM top5
+ CROSS APPLY GetArrayElements(top5.topevent) AS arrayElement
+ )
+ SELECT arrayvalue.value.item AS Item, arrayvalue.value.price, arrayvalue.value.countEvents
+ INTO topItems
+ FROM arrayselect
+ ```
+
+6. Open **EcommerceWebApp.sln** and navigate to the **Web.config** file in the **Solution Explorer**.
+
+7. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier where it says **your URI here** and **your primary key here**. Then add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
+
+ Fill in your **products collection name**, **categories collection name**, and **top items collection name** as indicated. (These names should be **products, categories, and topItems** unless you chose to name yours differently.)
+
+8. Navigate to and open the **Checkout** folder within **EcommerceWebApp.sln**. Then open the **Web.config** file within that folder.
+
+9. Within the `<appSettings>` block, add the **URI** and **PRIMARY KEY** that you saved earlier where indicated. Then add in your **database name** and **collection name** as indicated. (These names should be **changefeedlabdatabase** and **changefeedlabcollection** unless you chose to name yours differently.)
+
+10. Press **Start** at the top of the page to run the program.
+
+11. Now you can play around on the e-commerce site. When you view an item, add an item to your cart, change the quantity of an item in your cart, or purchase an item, these events are passed through the Azure Cosmos DB change feed to the event hub, Azure Stream Analytics, and then Power BI. We recommend continuing to run DataGenerator to generate significant web traffic data and provide a realistic set of "Hot Products" on the e-commerce site.
+
+## Delete the resources
+
+To delete the resources that you created during this lab, navigate to the resource group in the [Azure portal](https://portal.azure.com/), then select **Delete resource group** from the menu at the top of the page and follow the instructions provided.
+
+## Next steps
+
+* To learn more about change feed, see [working with change feed support in Azure Cosmos DB](../change-feed.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/cli-samples.md
+
+ Title: Azure CLI Samples for Azure Cosmos DB Core (SQL) API
+description: Azure CLI Samples for Azure Cosmos DB Core (SQL) API
+ Last updated : 10/13/2020
+# Azure CLI samples for Azure Cosmos DB Core (SQL) API
+
+The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API-specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+
+These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+For Azure CLI samples for other APIs, see [CLI Samples for Cassandra](../cassandr).
+
+## Common Samples
+
+These samples apply to all Azure Cosmos DB APIs.
+
+|Task | Description |
+|||
+| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
+| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+|||
+
+## Core (SQL) API Samples
+
+|Task | Description |
+|||
+| [Create an Azure Cosmos account, database and container](../scripts/cli/sql/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and container for Core (SQL) API. |
+| [Create an Azure Cosmos account, database and container with autoscale](../scripts/cli/sql/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and container with autoscale for Core (SQL) API. |
+| [Throughput operations](../scripts/cli/sql/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and container.|
+| [Lock resources from deletion](../scripts/cli/sql/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+|||
cosmos-db Couchbase Cosmos Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/couchbase-cosmos-migration.md
+
+ Title: 'Migrate from CouchBase to Azure Cosmos DB SQL API'
+description: Step-by-Step guidance for migrating from CouchBase to Azure Cosmos DB SQL API
+ Last updated : 02/11/2020
+# Migrate from CouchBase to Azure Cosmos DB SQL API
+
+Azure Cosmos DB is a scalable, globally distributed, fully managed database. It provides guaranteed low latency access to your data. To learn more about Azure Cosmos DB, see the [overview](../introduction.md) article. This article provides instructions to migrate Java applications that are connected to Couchbase to a SQL API account in Azure Cosmos DB.
+
+## Differences in nomenclature
+
+The following are the key features that work differently in Azure Cosmos DB when compared to Couchbase:
+
+| Couchbase | Azure Cosmos DB |
+|--|--|
+| Couchbase server | Account |
+| Bucket | Database |
+| Bucket | Container/Collection |
+| JSON Document | Item / Document |
+
+## Key differences
+
+* Azure Cosmos DB has an "ID" field within the document whereas Couchbase has the ID as a part of bucket. The "ID" field is unique across the partition.
+
+* Azure Cosmos DB scales by using partitioning or sharding, which means it splits the data into multiple shards/partitions. These partitions/shards are created based on the partition key property that you provide. You can select a partition key that optimizes read operations, write operations, or both. To learn more, see the [partitioning](../partitioning-overview.md) article.
+
+* In Azure Cosmos DB, the top-level hierarchy doesn't need to denote the collection, because the collection name already exists. This feature makes the JSON structure much simpler. The following example shows the differences in the data model between Couchbase and Azure Cosmos DB:
+
+ **Couchbase**: Document ID = "99FF4444"
+
+ ```json
+ {
+ "TravelDocument":
+ {
+ "Country":"India",
+ "Validity" : "2022-09-01",
+ "Person":
+ {
+ "Name": "Manish",
+ "Address": "AB Road, City-z"
+ },
+ "Visas":
+ [
+ {
+ "Country":"India",
+ "Type":"Multi-Entry",
+ "Validity":"2022-09-01"
+ },
+ {
+ "Country":"US",
+ "Type":"Single-Entry",
+ "Validity":"2022-08-01"
+ }
+ ]
+ }
+ }
+ ```
+
+    **Azure Cosmos DB**: Refer to the "id" field within the document, as shown below:
+
+ ```json
+ {
+ "id" : "99FF4444",
+
+ "Country":"India",
+ "Validity" : "2022-09-01",
+ "Person":
+ {
+ "Name": "Manish",
+ "Address": "AB Road, City-z"
+ },
+ "Visas":
+ [
+ {
+ "Country":"India",
+ "Type":"Multi-Entry",
+ "Validity":"2022-09-01"
+ },
+ {
+ "Country":"US",
+ "Type":"Single-Entry",
+ "Validity":"2022-08-01"
+ }
+ ]
+ }
+
+ ```
+
+## Java SDK support
+
+Azure Cosmos DB has the following SDKs to support different Java frameworks:
+
+* Async SDK
+* Spring Boot SDK
+
+The following sections describe when to use each of these SDKs, considering three types of workloads as examples.
+
+## Couchbase as a document repository & Spring Data-based custom queries
+
+If the workload that you are migrating is based on the Spring Boot SDK, then you can use the following steps:
+
+1. Add parent to the POM.xml file:
+
+    ```xml
+ <parent>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-parent</artifactId>
+ <version>2.1.5.RELEASE</version>
+ <relativePath/>
+ </parent>
+ ```
+
+1. Add properties to the POM.xml file:
+
+    ```xml
+ <azure.version>2.1.6</azure.version>
+ ```
+
+1. Add dependencies to the POM.xml file:
+
+    ```xml
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-cosmosdb-spring-boot-starter</artifactId>
+ <version>2.1.6</version>
+ </dependency>
+ ```
+
+1. Add application properties under resources and specify the following. Make sure to replace the URL, key, and database name parameters:
+
+    ```properties
+ azure.cosmosdb.uri=<your-cosmosDB-URL>
+ azure.cosmosdb.key=<your-cosmosDB-key>
+ azure.cosmosdb.database=<your-cosmosDB-dbName>
+ ```
+
+1. Define the name of the collection in the model. You can also specify further annotations; for example, the ID and partition key annotations denote those fields explicitly:
+
+ ```java
+ @Document(collection = "mycollection")
+ public class User {
+        @Id
+ private String id;
+ private String firstName;
+ @PartitionKey
+ private String lastName;
+ }
+ ```
+
+The following are the code snippets for CRUD operations:
+
+### Insert and update operations
+
+Here, *_repo* is the repository object and *doc* is the POJO class's object. You can use `.save` to insert or upsert (if a document with the specified ID is found). The following code snippet shows how to insert or update a doc object:
+
+```_repo.save(doc);```
+
+### Delete operation
+
+Consider the following code snippet, where the doc object must have the ID and partition key that are mandatory to locate and delete the object:
+
+```_repo.delete(doc);```
+
+### Read operation
+
+You can read the document with or without specifying the partition key. If you don't specify the partition key, then it is treated as a cross-partition query. Consider the following code samples: the first one performs the operation by using the ID and partition key field, and the second one uses a regular field without specifying the partition key field.
+
+* ```_repo.findByIdAndName(objDoc.getId(),objDoc.getName());```
+* ```_repo.findAllByStatus(objDoc.getStatus());```
+
+That's it; you can now use your application with Azure Cosmos DB. The complete code sample for the example described in this doc is available in the [CouchbaseToCosmosDB-SpringCosmos](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/SpringCosmos) GitHub repo.
+
+## Couchbase as a document repository & using N1QL queries
+
+N1QL queries are the way to define queries in Couchbase. The following table compares a N1QL query to its Azure Cosmos DB equivalent:
+
+|N1QL query | Azure Cosmos DB query|
+|-|-|
+|SELECT META(`TravelDocument`).id AS id, `TravelDocument`.* FROM `TravelDocument` WHERE `_type` = "com.xx.xx.xx.xxx.xxx.xxxx" and country = 'India' and ANY m in Visas SATISFIES m.type == 'Multi-Entry' and m.Country IN ['India', 'Bhutan'] ORDER BY `Validity` DESC LIMIT 25 OFFSET 0 | SELECT c.id, c FROM c JOIN m IN c.Visas WHERE c._type = "com.xx.xx.xx.xxx.xxx.xxxx" and c.country = 'India' and m.type = 'Multi-Entry' and m.Country IN ('India', 'Bhutan') ORDER BY c.Validity DESC OFFSET 0 LIMIT 25 |
+
+You can notice the following changes in your N1QL queries:
+
+* You don't need to use the META keyword or refer to the first-level document. Instead, you can create your own reference to the container. In this example, it's called "c" (it can be anything). This reference is used as a prefix for all the first-level fields; for example, c.id, c.country, and so on.
+
+* Instead of "ANY" now you can do a join on subdocument and refer it with a dedicated alias such as "m". Once you have created alias for a subdocument you need to use alias. For example, m.Country.
+
+* The sequence of OFFSET and LIMIT is different in an Azure Cosmos DB query: you specify OFFSET first, then LIMIT (see the sketch below).
+
+It is recommended not to use the Spring Data SDK if you mostly use custom-defined queries, because it can add unnecessary overhead on the client side while passing the query to Azure Cosmos DB. Instead, use the direct Async Java SDK, which can be utilized much more efficiently in this case.
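+
+For illustration, here is a hedged Python sketch (the article's own samples use Java; names and credentials are placeholders) that runs a translated query, with OFFSET before LIMIT:
+
+```python
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("<account-uri>", credential="<primary-key>")
+container = client.get_database_client("<db-name>").get_container_client("<coll-name>")
+
+# In Cosmos DB SQL, OFFSET comes before LIMIT.
+query = ("SELECT c.id, c.country FROM c WHERE c.country = 'India' "
+         "ORDER BY c.Validity DESC OFFSET 0 LIMIT 25")
+for doc in container.query_items(query, enable_cross_partition_query=True):
+    print(doc)
+```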
+
+### Read operation
+
+Use the Async Java SDK with the following steps:
+
+1. Configure the following dependency in the POM.xml file:
+
+ ```java
+ <!-- https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb -->
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-cosmos</artifactId>
+ <version>3.0.0</version>
+ </dependency>
+ ```
+
+1. Create a connection object for Azure Cosmos DB by using the `ConnectionBuilder` method as shown in the following example. Make sure you put this declaration into a bean so that the following code executes only once:
+
+ ```java
+ ConnectionPolicy cp=new ConnectionPolicy();
+ cp.connectionMode(ConnectionMode.DIRECT);
+
+ if(client==null)
+ client= CosmosClient.builder()
+ .endpoint(Host)//(Host, PrimaryKey, dbName, collName).Builder()
+ .connectionPolicy(cp)
+ .key(PrimaryKey)
+ .consistencyLevel(ConsistencyLevel.EVENTUAL)
+ .build();
+
+ container = client.getDatabase(_dbName).getContainer(_collName);
+ ```
+
+1. To execute the query, you need to run the following code snippet:
+
+ ```java
+ Flux<FeedResponse<CosmosItemProperties>> objFlux= container.queryItems(query, fo);
+ ```
+
+With the help of the above method, you can pass multiple queries and execute them without any hassle. If you need to execute one large query that can be split into multiple queries, try the following code snippet instead of the previous one:
+
+```java
+for(SqlQuerySpec query:queries)
+{
+ objFlux= container.queryItems(query, fo);
+ objFlux .publishOn(Schedulers.elastic())
+ .subscribe(feedResponse->
+ {
+ if(feedResponse.results().size()>0)
+ {
+ _docs.addAll(feedResponse.results());
+ }
+
+ },
+ Throwable::printStackTrace,latch::countDown);
+ lstFlux.add(objFlux);
+}
+
+Flux.merge(lstFlux);
+latch.await();
+```
+
+With the previous code, you can run queries in parallel and increase the number of distributed executions to optimize performance. You can also run insert and update operations:
+
+### Insert operation
+
+To insert the document, run the following code:
+
+```java
+Mono<CosmosItemResponse> objMono= container.createItem(doc,ro);
+```
+
+Then subscribe to Mono as:
+
+```java
+CountDownLatch latch=new CountDownLatch(1);
+objMono .subscribeOn(Schedulers.elastic())
+ .subscribe(resourceResponse->
+ {
+ if(resourceResponse.statusCode()!=successStatus)
+ {
+ throw new RuntimeException(resourceResponse.toString());
+ }
+ },
+ Throwable::printStackTrace,latch::countDown);
+latch.await();
+```
+
+### Upsert operation
+
+The upsert operation requires you to specify the document that needs to be updated. To fetch the complete document, you can use the snippet mentioned under the read operation heading, and then modify the required fields. The following code snippet upserts the document:
+
+```java
+Mono<CosmosItemResponse> obs= container.upsertItem(doc, ro);
+```
+Then subscribe to the mono. Refer to the mono subscription snippet in the insert operation.
+
+### Delete operation
+
+The following snippet performs the delete operation:
+
+```java
+CosmosItem objItem= container.getItem(doc.Id, doc.Tenant);
+Mono<CosmosItemResponse> objMono = objItem.delete(ro);
+```
+
+Then subscribe to the mono; refer to the mono subscription snippet in the insert operation. The complete code sample is available in the [CouchbaseToCosmosDB-AsyncInSpring](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/AsyncInSpring) GitHub repo.
+
+## Couchbase as a key/value pair
+
+This is a simple type of workload in which you can perform lookups instead of queries. Use the following steps for key/value pairs:
+
+1. Consider having "/ID" as primary key, which will makes sure you can perform lookup operation directly in the specific partition. Create a collection and specify "/ID" as partition key.
+
+1. Switch off indexing completely. Because you will execute lookup operations, there is no point in carrying indexing overhead. To turn off indexing, sign in to the Azure portal and go to your Azure Cosmos DB account. Open the **Data Explorer**, select your **Database** and the **Container**, open the **Scale & Settings** tab, and select the **Indexing Policy**. Currently, the indexing policy looks like the following:
+
+ ```json
+ {
+ "indexingMode": "consistent",
+ "automatic": true,
+ "includedPaths": [
+ {
+ "path": "/*"
+ }
+ ],
+ "excludedPaths": [
+ {
+ "path": "/\"_etag\"/?"
+ }
+ ]
+ }
+    ```
+
+ Replace the above indexing policy with the following policy:
+
+ ```json
+ {
+ "indexingMode": "none",
+ "automatic": false,
+ "includedPaths": [],
+ "excludedPaths": []
+ }
+ ```
+
+1. Use the following code snippet to create the connection object (place it in a @Bean or make it static):
+
+ ```java
+ ConnectionPolicy cp=new ConnectionPolicy();
+ cp.connectionMode(ConnectionMode.DIRECT);
+
+ if(client==null)
+ client= CosmosClient.builder()
+ .endpoint(Host)//(Host, PrimaryKey, dbName, collName).Builder()
+ .connectionPolicy(cp)
+ .key(PrimaryKey)
+ .consistencyLevel(ConsistencyLevel.EVENTUAL)
+ .build();
+
+ container = client.getDatabase(_dbName).getContainer(_collName);
+ ```
+
+Now you can execute the CRUD operations as follows:
+
+### Read operation
+
+To read the item, use the following snippet:
+
+```java
+CosmosItemRequestOptions ro=new CosmosItemRequestOptions();
+ro.partitionKey(new PartitionKey(documentId));
+CountDownLatch latch=new CountDownLatch(1);
+
+var objCosmosItem= container.getItem(documentId, documentId);
+Mono<CosmosItemResponse> objMono = objCosmosItem.read(ro);
+objMono .subscribeOn(Schedulers.elastic())
+ .subscribe(resourceResponse->
+ {
+ if(resourceResponse.item()!=null)
+ {
+ doc= resourceResponse.properties().toObject(UserModel.class);
+ }
+ },
+ Throwable::printStackTrace,latch::countDown);
+latch.await();
+```
+
+### Insert operation
+
+To insert an item, run the following code:
+
+```java
+Mono<CosmosItemResponse> objMono= container.createItem(doc,ro);
+```
+
+Then subscribe to mono as:
+
+```java
+CountDownLatch latch=new CountDownLatch(1);
+objMono.subscribeOn(Schedulers.elastic())
+ .subscribe(resourceResponse->
+ {
+ if(resourceResponse.statusCode()!=successStatus)
+ {
+ throw new RuntimeException(resourceResponse.toString());
+ }
+ },
+ Throwable::printStackTrace,latch::countDown);
+latch.await();
+```
+
+### Upsert operation
+
+To update the value of an item, refer to the following code snippet:
+
+```java
+Mono<CosmosItemResponse> obs= container.upsertItem(doc, ro);
+```
+Then subscribe to the mono; refer to the mono subscription snippet in the insert operation.
+
+### Delete operation
+
+Use the following snippet to execute the delete operation:
+
+```java
+CosmosItem objItem= container.getItem(id, id);
+Mono<CosmosItemResponse> objMono = objItem.delete(ro);
+```
+
+Then subscribe to the mono; refer to the mono subscription snippet in the insert operation. The complete code sample is available in the [CouchbaseToCosmosDB-AsyncKeyValue](https://github.com/Azure-Samples/couchbaseTocosmosdb/tree/main/AsyncKeyValue) GitHub repo.
+
+## Data Migration
+
+There are two ways to migrate data.
+
+* **Use Azure Data Factory:** This is the most recommended method to migrate the data. Configure the source as Couchbase and the sink as Azure Cosmos DB SQL API; see the [Azure Cosmos DB Data Factory connector](../../data-factory/connector-azure-cosmos-db.md) article for detailed steps.
+
+* **Use the Azure Cosmos DB data import tool:** This option is recommended for migrating smaller amounts of data by using VMs. For detailed steps, see the [Data importer](../import-data.md) article.
+
+## Next Steps
+
+* To do performance testing, see the [Performance and scale testing with Azure Cosmos DB](./performance-testing.md) article.
+* To optimize the code, see the [Performance tips for Azure Cosmos DB](./performance-tips-async-java.md) article.
+* To explore the Java Async V3 SDK, see the [SDK reference](https://github.com/Azure/azure-cosmosdb-java/tree/v3) GitHub repo.
cosmos-db Create Cosmosdb Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-cosmosdb-resources-portal.md
+
+ Title: Quickstart - Create Azure Cosmos DB resources from the Azure portal
+description: This quickstart shows how to create an Azure Cosmos database, container, and items by using the Azure portal.
+ms.devlang: dotnet
+ Last updated : 08/26/2021
+# Quickstart: Create an Azure Cosmos account, database, container, and items from the Azure portal
+
+> [!div class="op_single_selector"]
+> * [Azure portal](create-cosmosdb-resources-portal.md)
+> * [.NET](create-sql-api-dotnet.md)
+> * [Java](create-sql-api-java.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
+>
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [SQL API](../introduction.md) account, create a document database and container, and add data to the container.
+
+## Prerequisites
+
+An Azure subscription or free Azure Cosmos DB trial account
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+
+<a id="create-account"></a>
+## Create an Azure Cosmos DB account
+
+<a id="create-container-database"></a>
+## Add a database and a container
+
+You can use the Data Explorer in the Azure portal to create a database and container.
+
+1. Select **Data Explorer** from the left navigation on your Azure Cosmos DB account page, and then select **New Container**.
+
+ You may need to scroll right to see the **Add Container** window.
+
+ :::image type="content" source="./media/create-cosmosdb-resources-portal/add-database-container.png" alt-text="The Azure portal Data Explorer, Add Container pane":::
+
+1. In the **Add container** pane, enter the settings for the new container.
+
+ |Setting|Suggested value|Description
+ ||||
+    |**Database ID**|ToDoList|Enter *ToDoList* as the name for the new database. Database names must contain from 1 through 255 characters, and they cannot contain `/, \\, #, ?`, or a trailing space. Check the **Share throughput across containers** option; it allows you to share the throughput provisioned on the database across all the containers within the database. This option also helps with cost savings. |
+    | **Database throughput**| Manual, 400 RU/s | You can provision **Autoscale** or **Manual** throughput. Manual throughput allows you to scale RU/s yourself, whereas autoscale throughput allows the system to scale RU/s based on usage. Select **Manual** for this example. <br><br> Leave the throughput at 400 request units per second (RU/s). If you want to reduce latency, you can scale up the throughput later by estimating the required RU/s with the [capacity calculator](estimate-ru-with-capacity-planner.md).<br><br>**Note**: This setting is not available when creating a new container in a serverless account. |
+ |**Container ID**|Items|Enter *Items* as the name for your new container. Container IDs have the same character requirements as database names.|
+ |**Partition key**| /category| The sample described in this article uses */category* as the partition key.|
+
+ Don't add **Unique keys** or turn on **Analytical store** for this example. Unique keys let you add a layer of data integrity to the database by ensuring the uniqueness of one or more values per partition key. For more information, see [Unique keys in Azure Cosmos DB.](../unique-keys.md) [Analytical store](../analytical-store-introduction.md) is used to enable large-scale analytics against operational data without any impact to your transactional workloads.
+
+1. Select **OK**. The Data Explorer displays the new database and the container that you created.
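+
+For reference, here is a hedged sketch of the equivalent setup with the `azure-cosmos` Python SDK (this quickstart itself only uses the portal; the credentials are placeholders):
+
+```python
+from azure.cosmos import CosmosClient, PartitionKey
+
+client = CosmosClient("<account-uri>", credential="<primary-key>")
+
+# Shared manual throughput of 400 RU/s, provisioned at the database level.
+database = client.create_database_if_not_exists(id="ToDoList", offer_throughput=400)
+container = database.create_container_if_not_exists(
+    id="Items", partition_key=PartitionKey(path="/category"))
+```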
+
+## Add data to your database
+
+Add data to your new database using Data Explorer.
+
+1. In **Data Explorer**, expand the **ToDoList** database, and expand the **Items** container. Next, select **Items**, and then select **New Item**.
+
+ :::image type="content" source="./media/create-sql-api-dotnet/azure-cosmosdb-new-document.png" alt-text="Create new documents in Data Explorer in the Azure portal":::
+
+1. Add the following structure to the document on the right side of the **Documents** pane:
+
+ ```json
+ {
+ "id": "1",
+ "category": "personal",
+ "name": "groceries",
+ "description": "Pick up apples and strawberries.",
+ "isComplete": false
+ }
+ ```
+
+1. Select **Save**.
+
+ :::image type="content" source="./media/create-sql-api-dotnet/azure-cosmosdb-save-document.png" alt-text="Copy in json data and select Save in Data Explorer in the Azure portal":::
+
+1. Select **New Item** again, and create and save another document with a unique `id`, and any other properties and values you want. Your documents can have any structure, because Azure Cosmos DB doesn't impose any schema on your data.
+
+## Query your data
+
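+You can query the items you created by using the Data Explorer's query editor. As a rough SDK-level equivalent, here is a hedged sketch assuming the `azure-cosmos` Python package (credentials are placeholders):
+
+```python
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("<account-uri>", credential="<primary-key>")
+container = client.get_database_client("ToDoList").get_container_client("Items")
+
+# Scoped to one partition because /category is the partition key.
+query = "SELECT * FROM c WHERE c.category = 'personal'"
+for item in container.query_items(query, partition_key="personal"):
+    print(item["name"], item["isComplete"])
+```
+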
+## Clean up resources
+
+If you wish to delete just the database and use the Azure Cosmos account in the future, you can delete the database with the following steps:
+
+* Go to your Azure Cosmos account.
+* Open **Data Explorer**, right-click the database that you want to delete, and select **Delete Database**.
+* Enter the Database ID/database name to confirm the delete operation.
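+
+Equivalently, a hedged sketch (assuming the `azure-cosmos` Python package; credentials are placeholders) that drops just the database from code:
+
+```python
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("<account-uri>", credential="<primary-key>")
+client.delete_database("ToDoList")  # removes the database and all its containers
+```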
+
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a database and container using the Data Explorer. You can now import additional data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Notebook Visualize Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-notebook-visualize-data.md
+
+ Title: 'Tutorial: Create a notebook in Azure Cosmos DB to analyze and visualize the data'
+description: 'Tutorial: Learn how to use built-in Jupyter notebooks to import data to Azure Cosmos DB, analyze the data, and visualize the output.'
+ Last updated : 11/05/2019
+# Tutorial: Create a notebook in Azure Cosmos DB to analyze and visualize the data
+
+This article describes how to use built-in Jupyter notebooks to import sample retail data to Azure Cosmos DB. You will see how to use the SQL and Azure Cosmos DB magic commands to run queries, analyze the data, and visualize the results.
+
+## Prerequisites
+
+* [Enable notebooks on an Azure Cosmos account](enable-notebooks.md)
+
+## Create the resources and import data
+
+In this section, you will create the Azure Cosmos database and container, and import the retail data into the container.
+
+1. Navigate to your Azure Cosmos account and open the **Data Explorer**.
+
+1. Go to the **Notebooks** tab, select `…` next to **My Notebooks** and create a **New Notebook**. Select **Python 3** as the default Kernel.
+
+ :::image type="content" source="./media/create-notebook-visualize-data/create-new-notebook.png" alt-text="Create a new notebook":::
+
+1. After a new notebook is created, you can rename it to something like **VisualizeRetailData.ipynb**.
+
+1. Next, you will create a database named "RetailDemo" and a container named "WebsiteData" to store the retail data, using /CartID as the partition key. Copy and paste the following code into a new cell in your notebook and run it:
+
+ ```python
+ import azure.cosmos
+ from azure.cosmos.partition_key import PartitionKey
+
+ database = cosmos_client.create_database_if_not_exists('RetailDemo')
+ print('Database RetailDemo created')
+
+ container = database.create_container_if_not_exists(id='WebsiteData', partition_key=PartitionKey(path='/CartID'))
+ print('Container WebsiteData created')
+ ```
+
+    To run a cell, select `Shift + Enter`, or select the cell and choose the **Run Active Cell** option in the Data Explorer navigation bar.
+
+ :::image type="content" source="./media/create-notebook-visualize-data/run-active-cell.png" alt-text="Run the active cell":::
+
+    The database and container are created in your current Azure Cosmos account. The container is provisioned with 400 RU/s. You will see the following output after the database and container are created.
+
+ ```console
+ Database RetailDemo created
+ Container WebsiteData created
+ ```
+
+ You can also refresh the **Data** tab and see the newly created resources:
+
+ :::image type="content" source="media/create-notebook-visualize-data/refresh-data-tab.png" alt-text="Refresh the data tab to see the new container":::
+
+1. Next, you will import the sample retail data into the Azure Cosmos container. Here is the format of an item from the retail data:
+
+ ```json
+ {
+ "CartID":5399,
+ "Action":"Viewed",
+ "Item":"Cosmos T-shirt",
+ "Price":350,
+ "UserName":"Demo.User10",
+ "Country":"Iceland",
+ "EventDate":"2015-06-25T00:00:00",
+ "Year":2015,"Latitude":-66.8673,
+ "Longitude":-29.8214,
+ "Address":"852 Modesto Loop, Port Ola, Iceland",
+ "id":"00ffd39c-7e98-4451-9b91-b2bcf2f9a32d"
+ }
+ ```
+
+    For this tutorial, the sample retail data is stored in Azure Blob storage. You can import it into the Azure Cosmos container by pasting the following code into a new cell. You can confirm that the data is successfully imported by running a query to select the number of items.
+
+ ```python
+ # Read data from storage
+    import urllib.request, json
+    from azure.cosmos import exceptions
+
+ with urllib.request.urlopen("https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json") as url:
+ data = json.loads(url.read().decode())
+
+ print("Importing data. This will take a few minutes...\n")
+
+ for event in data:
+ try:
+ container.upsert_item(body=event)
+        except exceptions.CosmosHttpResponseError as e:
+ raise
+
+ ## Run a query against the container to see number of documents
+ query = 'SELECT VALUE COUNT(1) FROM c'
+ result = list(container.query_items(query, enable_cross_partition_query=True))
+
+ print('Container with id \'{0}\' contains \'{1}\' items'.format(container.id, result[0]))
+ ```
+
+ When you run the previous query, it returns the following output:
+
+ ```console
+ Importing data. This will take a few minutes...
+
+ Container with id 'WebsiteData' contains '2654' items
+ ```
+
+## Get your data into a DataFrame
+
+Before running queries to analyze the data, you can read the data from the container into a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) for analysis. Use the following SQL magic command to read the data into a DataFrame:
+
+```bash
+%%sql --database {database_id} --container {container_id} --output outputDataframeVar
+{Query text}
+```
+
+To learn more, see the [built-in notebook commands and features in Azure Cosmos DB](use-python-notebook-features-and-commands.md) article. You will run the query `SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c`. The results will be saved into a Pandas DataFrame named df_cosmos. Paste the following command into a new notebook cell and run it:
+
+```python
+%%sql --database RetailDemo --container WebsiteData --output df_cosmos
+SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c
+```
+
+In a new notebook cell, run the following code to read the first 10 items from the output:
+
+```python
+# See a sample of the result
+df_cosmos.head(10)
+```
+
+## Run queries and analyze your data
+
+In this section, you will run some queries on the data retrieved.
+
+* **Query1:** Run a Group by query on the DataFrame to get the sum of total sales revenue for each country/region and display 5 items from the results. In a new notebook cell, run the following code:
+
+ ```python
+ df_revenue = df_cosmos.groupby("Country").sum().reset_index()
+ display(df_revenue.head(5))
+ ```
+
+ :::image type="content" source="./media/create-notebook-visualize-data/total-sales-revenue-output.png" alt-text="Total sales revenue output":::
+
+* **Query2:** To get a list of top five purchased items, open a new notebook cell and run the following code:
+
+ ```python
+ import pandas as pd
+
+ ## What are the top 5 purchased items?
+ pd.DataFrame(df_cosmos[df_cosmos['Action']=='Purchased'].groupby('Item').size().sort_values(ascending=False).head(5), columns=['Count'])
+ ```
+
+ :::image type="content" source="./media/create-notebook-visualize-data/top5-purchased-items.png" alt-text="Top five purchased items":::
+
+## Visualize your data
+
+1. Now that you have the revenue data from the Azure Cosmos container, you can visualize it with a visualization library of your choice. In this tutorial, we will use the Bokeh library. Open a new notebook cell and run the following code to install it:
+
+ ```python
+ import sys
+ !{sys.executable} -m pip install bokeh --user
+ ```
+
+1. Next, prepare to plot the data on a map. Join the data in Azure Cosmos DB with country/region information located in Azure Blob storage and convert the result to GeoJSON format. Copy the following code into a new notebook cell and run it.
+
+ ```python
+ import urllib.request, json
+ import geopandas as gpd
+
+ # Load country/region information for mapping
+ countries = gpd.read_file("https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/countries.json")
+
+ # Merge the countries/regions dataframe with our data in Azure Cosmos DB, joining on country/region code
+ df_merged = countries.merge(df_revenue, left_on = 'admin', right_on = 'Country', how='left')
+
+ # Convert to GeoJSON so bokeh can plot it
+ merged_json = json.loads(df_merged.to_json())
+ json_data = json.dumps(merged_json)
+ ```
+
+1. Visualize the sales revenue of different countries/regions on a world map by running the following code in a new notebook cell:
+
+ ```python
+ from bokeh.io import output_notebook, show
+ from bokeh.plotting import figure
+ from bokeh.models import GeoJSONDataSource, LinearColorMapper, ColorBar
+ from bokeh.palettes import brewer
+
+ #Input GeoJSON source that contains features for plotting.
+ geosource = GeoJSONDataSource(geojson = json_data)
+
+ #Choose our choropleth color palette: https://bokeh.pydata.org/en/latest/docs/reference/palettes.html
+ palette = brewer['YlGn'][8]
+
+ #Reverse color order so that dark green is highest revenue
+ palette = palette[::-1]
+
+ #Instantiate LinearColorMapper that linearly maps numbers in a range, into a sequence of colors.
+ color_mapper = LinearColorMapper(palette = palette, low = 0, high = 1000)
+
+ #Define custom tick labels for color bar.
+ tick_labels = {'0': '$0', '250': '$250', '500':'$500', '750':'$750', '1000':'$1000', '1250':'$1250', '1500':'$1500','1750':'$1750', '2000': '>$2000'}
+
+ #Create color bar.
+ color_bar = ColorBar(color_mapper=color_mapper, label_standoff=8,width = 500, height = 20,
+ border_line_color=None,location = (0,0), orientation = 'horizontal', major_label_overrides = tick_labels)
+
+ #Create figure object.
+ p = figure(title = 'Sales revenue by country', plot_height = 600 , plot_width = 1150, toolbar_location = None)
+ p.xgrid.grid_line_color = None
+ p.ygrid.grid_line_color = None
+
+ #Add patch renderer to figure.
+ p.patches('xs','ys', source = geosource,fill_color = {'field' :'ItemRevenue', 'transform' : color_mapper},
+ line_color = 'black', line_width = 0.25, fill_alpha = 1)
+
+ #Specify figure layout.
+ p.add_layout(color_bar, 'below')
+
+ #Display figure inline in Jupyter Notebook.
+ output_notebook()
+
+ #Display figure.
+ show(p)
+ ```
+
+    The output displays the world map in different colors, from darker for the countries/regions with the highest revenue to lighter for those with the lowest.
+
+ :::image type="content" source="./media/create-notebook-visualize-data/countries-revenue-map-visualization.png" alt-text="Countries/regions revenue map visualization":::
+
+1. Let's see another case of data visualization. The WebsiteData container has records of users who viewed an item, added an item to their cart, and purchased an item. Let's plot the conversion rate of items purchased. Run the following code in a new cell to visualize the conversion rate for each item:
+
+ ```python
+ from bokeh.io import show, output_notebook
+ from bokeh.plotting import figure
+ from bokeh.palettes import Spectral3
+ from bokeh.transform import factor_cmap
+ from bokeh.models import ColumnDataSource, FactorRange
+
+ # Get the top 10 items as an array
+ top_10_items = df_cosmos[df_cosmos['Action']=='Purchased'].groupby('Item').size().sort_values(ascending=False)[:10].index.values.tolist()
+
+ # Filter our data to only these 10 items
+ df_top10 = df_cosmos[df_cosmos['Item'].isin(top_10_items)]
+
+ # Group by Item and Action, sorting by event count
+ df_top10_sorted = df_top10.groupby(['Item', 'Action']).count().rename(columns={'Country':'ResultCount'}, inplace=False).reset_index().sort_values(['Item', 'ResultCount'], ascending = False).set_index(['Item', 'Action'])
+
+ # Get sorted X-axis values - this way, we can display the funnel of view -> add -> purchase
+ x_axis_values = df_top10_sorted.index.values.tolist()
+
+ group = df_top10_sorted.groupby(['Item', 'Action'])
+
+    # Specify colors for X axis
+ index_cmap = factor_cmap('Item_Action', palette=Spectral3, factors=sorted(df_top10.Action.unique()), start=1, end=2)
+
+ # Create the plot
+
+ p = figure(plot_width=1200, plot_height=500, title="Conversion rate of items from View -> Add to cart -> Purchase", x_range=FactorRange(*x_axis_values), toolbar_location=None, tooltips=[("Number of events", "@ResultCount_max"), ("Item, Action", "@Item_Action")])
+
+ p.vbar(x='Item_Action', top='ItemRevenue_max', width=1, source=group,
+ line_color="white", fill_color=index_cmap, )
+
+ #Configure how the plot looks
+ p.y_range.start = 0
+ p.x_range.range_padding = 0.05
+ p.xgrid.grid_line_color = None
+ p.xaxis.major_label_orientation = 1.2
+ p.outline_line_color = "black"
+ p.xaxis.axis_label = "Item"
+ p.yaxis.axis_label = "Count"
+
+ #Display figure inline in Jupyter Notebook.
+ output_notebook()
+
+ #Display figure.
+ show(p)
+ ```
+
+ :::image type="content" source="./media/create-notebook-visualize-data/visualize-purchase-conversion-rate.png" alt-text="Visualize purchase conversion rate":::
+
+## Next steps
+
+* To learn more about Python notebook commands, see the [how to use built-in notebook commands and features in Azure Cosmos DB](use-python-notebook-features-and-commands.md) article.
cosmos-db Create Real Time Weather Dashboard Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-real-time-weather-dashboard-powerbi.md
+
+ Title: Create a real-time dashboard using Azure Cosmos DB, Azure Analysis Services, and Power BI
+description: Learn how to create a live weather dashboard in Power BI using Azure Cosmos DB and Azure Analysis Services.
+ Last updated : 09/04/2019
+# Create a real-time dashboard using Azure Cosmos DB and Power BI
+
+This article describes the steps required to create a live weather dashboard in Power BI by using the Azure Cosmos DB OLTP connector and Azure Analysis Services. The Power BI dashboard will display charts to show near real-time information about temperature and rainfall in a region.
+
+Another option is to create near real-time reports using [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md). With Azure Synapse Link, you can connect Power BI to analyze your Azure Cosmos DB data, with no performance or cost impact to your transactional workloads, and no ETL pipelines. You can use either [DirectQuery](/power-bi/connect-dat) or import mode.
+
+## Reporting scenarios
+
+There are multiple ways to set up reporting dashboards on data stored in Azure Cosmos DB. Depending on the staleness requirements and the size of the data, the following table describes the reporting setup for each scenario:
+
+|Scenario |Setup |
+|||
+|1. Generating real time reports on large data sets with aggregates | **Option 1:** [Power BI and Azure Synapse Link with DirectQuery](../synapse-link-power-bi.md)<br /> **Option 2:** [Power BI and Spark connector with DirectQuery + Azure Databricks + Azure Cosmos DB Spark connector.](https://github.com/Azure/azure-cosmosdb-spark/wiki/Connecting-Cosmos-DB-with-PowerBI-using-spark-and-databricks-premium)<br /> **Option 3:** Power BI and Azure Analysis Services connector with DirectQuery + Azure Analysis Services + Azure Databricks + Cosmos DB Spark connector. |
+|2. Generating real time reports on large data sets (>= 10 GB) | **Option 1:** [Power BI and Azure Synapse Link with DirectQuery](../synapse-link-power-bi.md)<br /> **Option 2:** [Power BI and Azure Analysis Services connector with DirectQuery + Azure Analysis Services](create-real-time-weather-dashboard-powerbi.md) |
+|3. Generating ad-hoc reports on large data sets (< 10 GB) | [Power BI Azure Cosmos DB connector with import mode and incremental refresh](create-real-time-weather-dashboard-powerbi.md) |
+|4. Generating ad-hoc reports with periodic refresh | [Power BI Azure Cosmos DB connector with import mode (Scheduled periodic refresh)](powerbi-visualize.md) |
+|5. Generating ad-hoc reports (no refresh) | [Power BI Azure Cosmos DB connector with import mode](powerbi-visualize.md) |
+
+Scenarios 4 and 5 can be easily set up by [using the Azure Cosmos DB Power BI connector](powerbi-visualize.md). The rest of this article describes the setups for scenario 2 (option 2) and scenario 3.
+
+### Power BI with incremental refresh
+
+Power BI has a mode where incremental refresh can be configured. This mode eliminates the need to create and manage Azure Analysis Services partitions. Incremental refresh can be set up to filter only the latest updates in large datasets. However, this mode works only with the Power BI Premium service, which has a 10-GB dataset limitation.
+
+### Power BI Azure Analysis connector + Azure Analysis Services
+
+Azure Analysis Services provides a fully managed platform as a service that hosts enterprise-grade data models in the cloud. Massive data sets can be loaded from Azure Cosmos DB into Azure Analysis Services. To avoid querying the entire dataset all the time, the datasets can be subdivided into Azure Analysis Services partitions, which can be refreshed independently at different frequencies.
+
+## Power BI incremental refresh
+
+### Ingest weather data into Azure Cosmos DB
+
+Set up an ingestion pipeline to load [weather data](https://catalog.data.gov/dataset?groups=climate5434&#topic=climate_navigation) to Azure Cosmos DB. You can set up an [Azure Data Factory (ADF)](../../data-factory/connector-azure-cosmos-db.md) job to periodically load the latest weather data into Azure Cosmos DB using the HTTP Source and Cosmos DB sink.
++
+### Connect Power BI to Azure Cosmos DB
+
+1. **Connect Azure Cosmos account to Power BI** - Open Power BI Desktop and use the Azure Cosmos DB connector to select the right database and container.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/cosmosdb-powerbi-connector.png" alt-text="Azure Cosmos DB Power BI connector":::
+
+1. **Configure incremental refresh** - Follow the steps in [incremental refresh with Power BI](/power-bi/service-premium-incremental-refresh) article to configure incremental refresh for the dataset. Add the **RangeStart** and **RangeEnd** parameters as shown in the following screenshot:
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/configure-range-parameters.png" alt-text="Configure range parameters":::
+
+    Since the dataset has a Date column that is in text form, the **RangeStart** and **RangeEnd** parameters should be transformed to use the following filter. In the **Advanced Editor** pane, modify your query to add the following text to filter the rows based on the **RangeStart** and **RangeEnd** parameters:
+
+ ```
+ #"Filtered Rows" = Table.SelectRows(#"Expanded Document", each [Document.date] > DateTime.ToText(RangeStart,"yyyy-MM-dd") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-MM-dd"))
+ ```
+
+    Depending on the column and data type present in the source dataset, you can change the **RangeStart** and **RangeEnd** filters accordingly. (A C# sanity check of the `_ts` epoch conversion appears after these steps.)
+
+
+    |Property  |Data type  |Filter  |
+    ||||
+    |_ts | Numeric | [_ts] > Duration.TotalSeconds(RangeStart - #datetime(1970, 1, 1, 0, 0, 0)) and [_ts] < Duration.TotalSeconds(RangeEnd - #datetime(1970, 1, 1, 0, 0, 0)) |
+    |Date (for example: 2019-08-19) | String | [Document.date] > DateTime.ToText(RangeStart,"yyyy-MM-dd") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-MM-dd") |
+    |Date (for example: 2019-08-11 12:00:00) | String | [Document.date] > DateTime.ToText(RangeStart,"yyyy-MM-dd HH:mm:ss") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-MM-dd HH:mm:ss") |
++
+1. **Define the refresh policy** - Define the refresh policy by navigating to the **Incremental refresh** tab on the context menu for the table. Set the refresh policy to refresh **every day** and store the last month of data.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/define-refresh-policy.png" alt-text="Define refresh policy":::
+
+ Ignore the warning that says *the M query cannot be confirmed to be folded*. The Azure Cosmos DB connector folds filter queries.
+
+1. **Load the data and generate the reports** - By using the data you have loaded earlier, create the charts to report on temperature and rainfall.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/load-data-generate-report.png" alt-text="Load data and generate report":::
+
+1. **Publish the report to Power BI Premium** - Since incremental refresh is a Premium-only feature, the publish dialog only allows selection of a workspace on Premium capacity. The first refresh may take longer to import the historical data. Subsequent data refreshes are much quicker because they use incremental refresh.
++
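+As a quick sanity check on the `_ts` filter shown in the table above, you can reproduce the same epoch-seconds conversion in C#. This sketch is illustrative only and isn't part of the Power BI setup; `_ts` is the Azure Cosmos DB system property that stores an item's last-modified time as seconds since the Unix epoch.
+
+```csharp
+using System;
+
+class EpochCheck
+{
+    static void Main()
+    {
+        // Power Query's Duration.TotalSeconds(RangeStart - #datetime(1970, 1, 1, 0, 0, 0))
+        // yields seconds since the Unix epoch; DateTimeOffset computes the same value.
+        var rangeStart = new DateTimeOffset(2019, 8, 1, 0, 0, 0, TimeSpan.Zero);
+        long epochSeconds = rangeStart.ToUnixTimeSeconds();
+
+        Console.WriteLine(epochSeconds); // 1564617600 -- compare against _ts values in your container
+    }
+}
+```
+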
+## Power BI Azure Analysis connector + Azure Analysis Services
+
+### Ingest weather data into Azure Cosmos DB
+
+Set up an ingestion pipeline to load [weather data](https://catalog.data.gov/dataset?groups=climate5434&#topic=climate_navigation) to Azure Cosmos DB. You can set up an Azure Data Factory (ADF) job to periodically load the latest weather data into Azure Cosmos DB using the HTTP source and Cosmos DB sink.
+
+### Connect Azure Analysis Services to Azure Cosmos account
+
+1. **Create a new Azure Analysis Services cluster** - [Create an instance of Azure Analysis services](../../analysis-services/analysis-services-create-server.md) in the same region as the Azure Cosmos account and the Databricks cluster.
+
+1. **Create a new Analysis Services Tabular Project in Visual Studio** - [Install the SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt) and create an Analysis Services Tabular project in Visual Studio.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/create-analysis-services-project.png" alt-text="Create Azure Analysis Services project":::
+
+   Choose the **Integrated Workspace** instance and set the Compatibility Level to **SQL Server 2017 / Azure Analysis Services (1400)**.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/tabular-model-designer.png" alt-text="Azure Analysis Services tabular model designer":::
+
+1. **Add the Azure Cosmos DB data source** - Navigate to **Models**> **Data Sources** > **New Data Source** and add the Azure Cosmos DB data source as shown in the following screenshot:
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/add-data-source.png" alt-text="Add Cosmos DB data source":::
+
+   Connect to Azure Cosmos DB by providing the **account URI**, **database name**, and the **container name**. You can now see that the data from the Azure Cosmos container is imported into the tabular model.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/preview-cosmosdb-data.png" alt-text="Preview Azure Cosmos DB data":::
+
+1. **Construct the Analysis Services model** - Open the query editor and perform the required operations to optimize the loaded data set:
+
+ * Extract only the weather-related columns (temperature and rainfall)
+
+ * Extract the month information from the table. This data is useful in creating partitions as described in the next section.
+
+ * Convert the temperature columns to number
+
+ The resulting M expression is as follows:
+
+ ```
+ let
+ Source=#"DocumentDB/https://[ACCOUNTNAME].documents.azure.com:443/",
+ #"Expanded Document" = Table.ExpandRecordColumn(Source, "Document", {"id", "_rid", "_self", "_etag", "fogground", "snowfall", "dust", "snowdepth", "mist", "drizzle", "hail", "fastest2minwindspeed", "thunder", "glaze", "snow", "ice", "fog", "temperaturemin", "fastest5secwindspeed", "freezingfog", "temperaturemax", "blowingsnow", "freezingrain", "rain", "highwind", "date", "precipitation", "fogheavy", "smokehaze", "avgwindspeed", "fastest2minwinddir", "fastest5secwinddir", "_attachments", "_ts"}, {"Document.id", "Document._rid", "Document._self", "Document._etag", "Document.fogground", "Document.snowfall", "Document.dust", "Document.snowdepth", "Document.mist", "Document.drizzle", "Document.hail", "Document.fastest2minwindspeed", "Document.thunder", "Document.glaze", "Document.snow", "Document.ice", "Document.fog", "Document.temperaturemin", "Document.fastest5secwindspeed", "Document.freezingfog", "Document.temperaturemax", "Document.blowingsnow", "Document.freezingrain", "Document.rain", "Document.highwind", "Document.date", "Document.precipitation", "Document.fogheavy", "Document.smokehaze", "Document.avgwindspeed", "Document.fastest2minwinddir", "Document.fastest5secwinddir", "Document._attachments", "Document._ts"}),
+ #"Select Columns" = Table.SelectColumns(#"Expanded Document",{"Document.temperaturemin", "Document.temperaturemax", "Document.rain", "Document.date"}),
+ #"Duplicated Column" = Table.DuplicateColumn(#"Select Columns", "Document.date", "Document.month"),
+ #"Extracted First Characters" = Table.TransformColumns(#"Duplicated Column", {{"Document.month", each Text.Start(_, 7), type text}}),
+ #"Sorted Rows" = Table.Sort(#"Extracted First Characters",{{"Document.date", Order.Ascending}}),
+ #"Changed Type" = Table.TransformColumnTypes(#"Sorted Rows",{{"Document.temperaturemin", type number}, {"Document.temperaturemax", type number}}),
+ #"Filtered Rows" = Table.SelectRows(#"Changed Type", each [Document.month] = "2019-07")
+ in
+ #"Filtered Rows"
+ ```
+
+ Additionally, change the data type of the temperature columns to Decimal to make sure that these values can be plotted in Power BI.
+
+1. **Create Azure Analysis partitions** - Create partitions in Azure Analysis Services to divide the dataset into logical partitions that can be refreshed independently and at different frequencies. In this example, you create two partitions that would divide the dataset into the most recent month's data and everything else.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/create-analysis-services-partitions.png" alt-text="Create analysis services partitions":::
+
+   Create the following two partitions in Azure Analysis Services:
+
+ * **Latest Month** - `#"Filtered Rows" = Table.SelectRows(#"Sorted Rows", each [Document.month] = "2019-07")`
+ * **Historical** - `#"Filtered Rows" = Table.SelectRows(#"Sorted Rows", each [Document.month] <> "2019-07")`
+
+1. **Deploy the Model to the Azure Analysis Server** - Right-click the Azure Analysis Services project and choose **Deploy**. Add the server name in the **Deployment Server properties** pane.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/analysis-services-deploy-model.png" alt-text="Deploy Azure Analysis Services model":::
+
+1. **Configure partition refreshes and merges** - Azure Analysis Services allows independent processing of partitions. Since we want the **Latest Month** partition to be constantly updated with the most recent data, set the refresh interval to 5 minutes. You can refresh the data by using the [REST API](../../analysis-services/analysis-services-async-refresh.md), [Azure Automation](../../analysis-services/analysis-services-refresh-azure-automation.md), or a [Logic App](../../analysis-services/analysis-services-refresh-logic-app.md); a hedged sketch of the REST call follows this list. It's not required to refresh the data in the historical partition. Additionally, you need to write some code to consolidate the latest month partition into the historical partition and create a new latest month partition.
+
+## Connect Power BI to Analysis Services
+
+1. **Connect to the Azure Analysis Services server by using the Azure Analysis Services database connector** - Choose the **Live** mode and connect to the Azure Analysis Services instance as shown in the following screenshot:
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/analysis-services-get-data.png" alt-text="Get data from Azure Analysis Services":::
+
+1. **Load the data and generate reports** - By using the data you have loaded earlier, create charts to report on temperature and rainfall. Since you are creating a live connection, the queries should be executed on the data in the Azure Analysis Services model that you have deployed in the previous step. The temperature charts will be updated within five minutes after the new data is loaded into Azure Cosmos DB.
+
+ :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/load-data-generate-report.png" alt-text="Load the data and generate reports":::
+
+## Next steps
+
+* To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/).
+
+* [Connect Qlik Sense to Azure Cosmos DB and visualize your data](../visualize-qlik-sense.md)
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-dotnet-v4.md
+
+ Title: Manage Azure Cosmos DB SQL API resources using .NET V4 SDK
+description: Use this quickstart to build a console app by using the .NET V4 SDK to manage Azure Cosmos DB SQL API account resources.
++++
+ms.devlang: dotnet
+ Last updated : 08/26/2021++
+# Quickstart: Build a console app by using the .NET V4 SDK (preview) to manage Azure Cosmos DB SQL API account resources
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark v3 connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
+
+Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this article to install the .NET V4 (Azure.Cosmos) package and build an app. Then, try out the example code for basic create, read, update, and delete (CRUD) operations on the data stored in Azure Cosmos DB.
+
+> [!IMPORTANT]
+> The .NET V4 SDK for Azure Cosmos DB is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+>
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value, document, and graph databases. Use the Azure Cosmos DB SQL API client library for .NET to:
+
+* Create an Azure Cosmos database and a container.
+* Add sample data to the container.
+* Query the data.
+* Delete the database.
+
+[Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/v4) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Cosmos)
+
+## Prerequisites
+
+* Azure subscription. [Create one for free](https://azure.microsoft.com/free/). You can also [try Azure Cosmos DB](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
+* [.NET Core 3 SDK](https://dotnet.microsoft.com/download/dotnet-core). You can verify which version is available in your environment by running `dotnet --version`.
+
+## Set up
+
+This section walks you through creating an Azure Cosmos account and setting up a project that uses the Azure Cosmos DB SQL API client library for .NET to manage resources.
+
+The example code described in this article creates a `FamilyDatabase` database and family members within that database. Each family member is an item and has properties such as `Id`, `FamilyName`, `FirstName`, `LastName`, `Parents`, `Children`, and `Address`. The `LastName` property is used as the partition key for the container.
+
+### <a id="create-account"></a>Create an Azure Cosmos account
+
+If you use the [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option, an Azure Cosmos test account of type **SQL API** is created for you. Because the account is already created, you don't have to create it explicitly; you can skip this section and move to the next section.
+
+If you have your own Azure subscription or created a subscription for free, you should create an Azure Cosmos account explicitly. The following code will create an Azure Cosmos account with session consistency. The account is replicated in `South Central US` and `North Central US`.
+
+You can use Azure Cloud Shell to create the Azure Cosmos account. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work: either Bash or PowerShell.
+
+For this quickstart, use Bash. Azure Cloud Shell also requires a storage account. You can create one when prompted.
+
+1. Select the **Try It** button next to the following code, choose **Bash** mode, select **create a storage account**, and sign in to Cloud Shell.
+
+1. Copy and paste the following code to Azure Cloud Shell and run it. The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command.
+
+ ```azurecli-interactive
+
+ # Set variables for the new SQL API account, database, and container
+ resourceGroupName='myResourceGroup'
+ location='southcentralus'
+
+ # The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command
+ accountName='mysqlapicosmosdb'
+
+ # Create a resource group
+ az group create \
+ --name $resourceGroupName \
+ --location $location
+
+ # Create a SQL API Cosmos DB account with session consistency and multi-region writes enabled
+ az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --kind GlobalDocumentDB \
+ --locations regionName="South Central US" failoverPriority=0 --locations regionName="North Central US" failoverPriority=1 \
+ --default-consistency-level "Session" \
+ --enable-multiple-write-locations true
+
+ ```
+
+The creation of the Azure Cosmos account takes a while. After the operation is successful, you can see the confirmation output. Sign in to the [Azure portal](https://portal.azure.com/) and verify that the Azure Cosmos account with the specified name exists. You can close the Azure Cloud Shell window after the resource is created.
+
+### <a id="create-dotnet-core-app"></a>Create a .NET app
+
+Create a .NET application in your preferred editor or IDE. Open the Windows command prompt or a terminal window from your local computer. You'll run all the commands in the next sections from the command prompt or terminal.
+
+Run the following `dotnet new` command to create an app with the name `todo`. The `--langVersion` parameter sets the `LangVersion` property in the created project file.
+
+ ```bash
+ dotnet new console --langVersion:8 -n todo
+ ```
+
+Use the following commands to change your directory to the newly created app folder and build the application:
+
+ ```bash
+ cd todo
+ dotnet build
+ ```
+
+The expected output from the build should look something like this:
+
+```bash
+ Restore completed in 100.37 ms for C:\Users\user1\Downloads\CosmosDB_Samples\todo\todo.csproj.
+ todo -> C:\Users\user1\Downloads\CosmosDB_Samples\todo\bin\Debug\netcoreapp3.0\todo.dll
+
+Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+
+Time Elapsed 00:00:34.17
+```
+
+### <a id="install-package"></a>Install the Azure Cosmos DB package
+
+While you're still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the `dotnet add package` command:
+
+ ```bash
+ dotnet add package Azure.Cosmos --version 4.0.0-preview3
+ ```
+
+### Copy your Azure Cosmos account credentials from the Azure portal
+
+The sample application needs to authenticate to your Azure Cosmos account. To authenticate, pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Go to your Azure Cosmos account.
+
+1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** values for your account. You'll use these values in the code examples that follow; a sketch of reading them from environment variables appears after these steps.
+
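+The sample's global variables hard-code the endpoint and key values. If you'd rather keep credentials out of source code, one common alternative (an assumption here, not something this quickstart's sample does) is to read them from environment variables, for example ones named `EndpointUrl` and `PrimaryKey`:
+
+```csharp
+using System;
+
+public static class CosmosCredentials
+{
+    // Hypothetical variable names; set them with setx (Windows) or export (Linux/macOS) beforehand.
+    public static string EndpointUrl { get; } =
+        Environment.GetEnvironmentVariable("EndpointUrl")
+        ?? throw new InvalidOperationException("Set the EndpointUrl environment variable.");
+
+    public static string PrimaryKey { get; } =
+        Environment.GetEnvironmentVariable("PrimaryKey")
+        ?? throw new InvalidOperationException("Set the PrimaryKey environment variable.");
+}
+```
+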
+## <a id="object-model"></a>Learn the object model
+
+Before you continue building the application, let's look into the hierarchy of resources in Azure Cosmos DB and the object model that's used to create and access these resources. Azure Cosmos DB creates resources in the following order:
+
+* Azure Cosmos account
+* Databases
+* Containers
+* Items
+
+To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article. You'll use the following .NET classes to interact with these resources (a short sketch tying them together follows this list):
+
+* `CosmosClient`. This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+* `CreateDatabaseIfNotExistsAsync`. This method creates (if it doesn't exist) or gets (if it already exists) a database resource as an asynchronous operation.
+* `CreateContainerIfNotExistsAsync`. This method creates (if it doesn't exist) or gets (if it already exists) a container as an asynchronous operation. You can check the status code from the response to determine whether the container was newly created (201) or an existing container was returned (200).
+* `CreateItemAsync`. This method creates an item within the container.
+* `UpsertItemAsync`. This method creates an item within the container if it doesn't already exist or replaces the item if it already exists.
+* `GetItemQueryIterator`. This method creates a query for items under a container in an Azure Cosmos database by using a SQL statement with parameterized values.
+* `DeleteAsync`. This method deletes the specified database from your Azure Cosmos account.
+
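+The following sketch shows how these classes fit together, under the assumption that the V4 preview API mirrors the linked sample source (which uses `CosmosDatabase` and `CosmosContainer` type names). Because the SDK is in preview, treat the exact types and response conversions as provisional and refer to the sample repository for the authoritative code.
+
+```csharp
+using System.Threading.Tasks;
+using Azure.Cosmos;
+
+public static class ObjectModelSketch
+{
+    public static async Task RunAsync(string endpoint, string key)
+    {
+        // The client is the entry point to the Azure Cosmos DB service.
+        using CosmosClient client = new CosmosClient(endpoint, key);
+
+        // Create the database and the container if they don't already exist.
+        CosmosDatabase database = await client.CreateDatabaseIfNotExistsAsync("FamilyDatabase");
+        CosmosContainer container = await database.CreateContainerIfNotExistsAsync("FamilyContainer", "/LastName");
+
+        // Insert an item; the partition key value must match the item's LastName property.
+        // 'family' would be an instance of the Family class defined later in this article.
+        // await container.CreateItemAsync(family, new PartitionKey(family.LastName));
+
+        // Remove the database and everything in it.
+        await database.DeleteAsync();
+    }
+}
+```
+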
+ ## <a id="code-examples"></a>Configure code examples
+
+The sample code described in this article creates a family database in Azure Cosmos DB. The family database contains family details such as name, address, location, parents, children, and pets.
+
+Before you populate the data for your Azure Cosmos account, define the properties of a family item. Create a new class named `Family.cs` at the root level of your sample application and add the following code to it:
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Family.cs)]
+
+### Add the using directives and define the client object
+
+From the project directory, open the *Program.cs* file in your editor and add the following `using` directives at the top of your application:
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Usings)]
++
+Add the following global variables in your `Program` class. These variables will include the endpoint and authorization keys, the name of the database, and the container that you'll create. Be sure to replace the endpoint and authorization key values according to your environment.
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Constants)]
+
+Finally, replace the `Main` method:
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Main)]
+
+### Create a database
+
+Define the `CreateDatabaseAsync` method within the `Program` class. This method creates the `FamilyDatabase` database if it doesn't already exist.
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=CreateDatabaseAsync)]
+
+### Create a container
+
+Define the `CreateContainerAsync` method within the `Program` class. This method creates the `FamilyContainer` container if it doesn't already exist.
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=CreateContainerAsync)]
+
+### Create an item
+
+Create a family item by adding the `AddItemsToContainerAsync` method with the following code. You can use the `CreateItemAsync` or `UpsertItemAsync` method to create an item.
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=AddItemsToContainerAsync)]
+
+### Query the items
+
+After you insert an item, you can run a query to get the details of the Andersen family. The following code shows how to execute the query by using the SQL query directly. The SQL query to get the Andersen family details is `SELECT * FROM c WHERE c.LastName = 'Andersen'`. Define the `QueryItemsAsync` method within the `Program` class and add the following code to it:
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=QueryItemsAsync)]
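+
+If you can't open the linked sample file, the core of the query loop looks roughly like this sketch. The `await foreach` iteration assumes the V4 preview's async-enumerable query results (the V3 SDK uses a `FeedIterator` with `HasMoreResults` instead); `Family` is the class from *Family.cs*, and the exact preview types remain provisional.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Cosmos;
+
+public static class QuerySketch
+{
+    public static async Task QueryItemsAsync(CosmosContainer container)
+    {
+        QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.LastName = 'Andersen'");
+
+        // Stream the matching items as they are returned by the service.
+        await foreach (Family family in container.GetItemQueryIterator<Family>(query))
+        {
+            Console.WriteLine($"\tRead {family}\n");
+        }
+    }
+}
+```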
+
+### Replace an item
+
+Read a family item and then update it by adding the `ReplaceFamilyItemAsync` method with the following code:
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=ReplaceFamilyItemAsync)]
+
+### Delete an item
+
+Delete a family item by adding the `DeleteFamilyItemAsync` method with the following code:
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=DeleteFamilyItemAsync)]
+
+### Delete the database
+
+You can delete the database by adding the `DeleteDatabaseAndCleanupAsync` method with the following code:
+
+[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=DeleteDatabaseAndCleanupAsync)]
+
+After you add all the required methods, save the *Program.cs* file.
+
+## Run the code
+
+Run the application to create the Azure Cosmos DB resources:
+
+ ```bash
+ dotnet run
+ ```
+
+The following output is generated when you run the application:
+
+ ```bash
+ Created Database: FamilyDatabase
+
+ Created Container: FamilyContainer
+
+ Created item in database with id: Andersen.1
+
+ Running query: SELECT * FROM c WHERE c.LastName = 'Andersen'
+
+    Read {"id":"Andersen.1","LastName":"Andersen","Parents":[{"FamilyName":null,"FirstName":"Thomas"},{"FamilyName":null,"FirstName":"Mary Kay"}],"Children":[{"FamilyName":null,"FirstName":"Henriette Thaulow","Gender":"female","Grade":5,"Pets": [{"GivenName":"Fluffy"}]}],"Address":{"State":"WA","County":"King","City":"Seattle"},"IsRegistered":false}
+
+ Updated Family [Wakefield,Wakefield.7].
+    Body is now: {"id":"Wakefield.7","LastName":"Wakefield","Parents":[{"FamilyName":"Wakefield","FirstName":"Robin"},{"FamilyName":"Miller","FirstName":"Ben"}],"Children":[{"FamilyName":"Merriam","FirstName":"Jesse","Gender":"female","Grade":6,"Pets":[{"GivenName":"Goofy"},{"GivenName":"Shadow"}]},{"FamilyName":"Miller","FirstName":"Lisa","Gender":"female","Grade":1,"Pets":null}],"Address":{"State":"NY","County":"Manhattan","City":"NY"},"IsRegistered":true}
+
+ Deleted Family [Wakefield,Wakefield.7]
+
+ Deleted Database: FamilyDatabase
+
+ End of demo, press any key to exit.
+ ```
+
+You can validate that the data is created by signing in to the Azure portal and viewing the required items in your Azure Cosmos account.
+
+## Clean up resources
+
+When you no longer need the Azure Cosmos account and the corresponding resource group, you can use the Azure CLI or Azure PowerShell to remove them. The following command shows how to delete the resource group by using the Azure CLI:
+
+```azurecli
+az group delete -g "myResourceGroup"
+```
+
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos account, create a database, and create a container by using a .NET Core app. You can now import more data to your Azure Cosmos account by using the instructions in the following article:
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-dotnet.md
+
+ Title: Quickstart - Build a .NET console app to manage Azure Cosmos DB SQL API resources
+description: Learn how to build a .NET console app to manage Azure Cosmos DB SQL API account resources in this quickstart.
++++
+ms.devlang: dotnet
+ Last updated : 08/26/2021+++
+# Quickstart: Build a .NET console app to manage Azure Cosmos DB SQL API resources
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark v3 connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
+
+Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this doc to install the .NET package, build an app, and try out the example code for basic CRUD operations on the data stored in Azure Cosmos DB.
+
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value, document, and graph databases. Use the Azure Cosmos DB SQL API client library for .NET to:
+
+* Create an Azure Cosmos database and a container
+* Add sample data to the container
+* Query the data
+* Delete the database
+
+[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+
+## Prerequisites
+
+* Azure subscription - [create one for free](https://azure.microsoft.com/free/) or you can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
+* The [.NET Core 2.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core/2.1).
+
+## Setting up
+
+This section walks you through creating an Azure Cosmos account and setting up a project that uses the Azure Cosmos DB SQL API client library for .NET to manage resources. The example code described in this article creates a `FamilyDatabase` database and family members (each family member is an item) within that database. Each family member has properties such as `Id`, `FamilyName`, `FirstName`, `LastName`, `Parents`, `Children`, and `Address`. The `LastName` property is used as the partition key for the container.
+
+### <a id="create-account"></a>Create an Azure Cosmos account
+
+If you use the [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option, an Azure Cosmos DB test account of type **SQL API** is created for you. Because the account is already created, you don't have to create it explicitly; you can skip this section and move to the next section.
+
+If you have your own Azure subscription or created a subscription for free, you should create an Azure Cosmos account explicitly. The following code will create an Azure Cosmos account with session consistency. The account is replicated in `South Central US` and `North Central US`.
+
+You can use Azure Cloud Shell to create the Azure Cosmos account. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell. For this quickstart, choose **Bash** mode. Azure Cloud Shell also requires a storage account; you can create one when prompted.
+
+Select the **Try It** button next to the following code, choose **Bash** mode, select **create a storage account**, and sign in to Cloud Shell. Next, copy and paste the following code to Azure Cloud Shell and run it. The Azure Cosmos account name must be globally unique, so make sure to update the `mysqlapicosmosdb` value before you run the command.
+
+```azurecli-interactive
+
+# Set variables for the new SQL API account, database, and container
+resourceGroupName='myResourceGroup'
+location='southcentralus'
+
+# The Azure Cosmos account name must be globally unique, make sure to update the `mysqlapicosmosdb` value before you run the command
+accountName='mysqlapicosmosdb'
+
+# Create a resource group
+az group create \
+ --name $resourceGroupName \
+ --location $location
+
+# Create a SQL API Cosmos DB account with session consistency and multi-region writes enabled
+az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --kind GlobalDocumentDB \
+ --locations regionName="South Central US" failoverPriority=0 --locations regionName="North Central US" failoverPriority=1 \
+ --default-consistency-level "Session" \
+ --enable-multiple-write-locations true
+
+```
+
+The creation of the Azure Cosmos account takes a while. After the operation succeeds, you can see the confirmation output. Sign in to the [Azure portal](https://portal.azure.com/) and verify that the Azure Cosmos account with the specified name exists. You can close the Azure Cloud Shell window after the resource is created.
+
+### <a id="create-dotnet-core-app"></a>Create a new .NET app
+
+Create a new .NET application in your preferred editor or IDE. Open the Windows command prompt or a terminal window from your local computer. You will run all the commands in the next sections from the command prompt or terminal. Run the following `dotnet new` command to create a new app with the name `todo`. The `--langVersion` parameter sets the `LangVersion` property in the created project file.
+
+```console
+dotnet new console --langVersion 7.1 -n todo
+```
+
+Change your directory to the newly created app folder. You can build the application with:
+
+```console
+cd todo
+dotnet build
+```
+
+The expected output from the build should look something like this:
+
+```console
+ Restore completed in 100.37 ms for C:\Users\user1\Downloads\CosmosDB_Samples\todo\todo.csproj.
+ todo -> C:\Users\user1\Downloads\CosmosDB_Samples\todo\bin\Debug\netcoreapp2.2\todo.dll
+ todo -> C:\Users\user1\Downloads\CosmosDB_Samples\todo\bin\Debug\netcoreapp2.2\todo.Views.dll
+
+Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+
+Time Elapsed 00:00:34.17
+```
+
+### <a id="install-package"></a>Install the Azure Cosmos DB package
+
+While still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the `dotnet add package` command.
+
+```console
+dotnet add package Microsoft.Azure.Cosmos
+```
+
+### Copy your Azure Cosmos account credentials from the Azure portal
+
+The sample application needs to authenticate to your Azure Cosmos account. To authenticate, you should pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos account.
+
+1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account. You will add the URI and key values to environment variables in the next step.
+
+### Set the environment variables
+
+After you have copied the **URI** and **PRIMARY KEY** of your account, save them to new environment variables on the local machine running the application. To set the environment variables, open a console window and run the following command. Make sure to replace the `<Your_Azure_Cosmos_account_URI>` and `<Your_Azure_Cosmos_account_PRIMARY_KEY>` placeholder values.
+
+**Windows**
+
+```console
+setx EndpointUrl "<Your_Azure_Cosmos_account_URI>"
+setx PrimaryKey "<Your_Azure_Cosmos_account_PRIMARY_KEY>"
+```
+
+**Linux**
+
+```bash
+export EndpointUrl="<Your_Azure_Cosmos_account_URI>"
+export PrimaryKey="<Your_Azure_Cosmos_account_PRIMARY_KEY>"
+```
+
+**macOS**
+
+```bash
+export EndpointUrl="<Your_Azure_Cosmos_account_URI>"
+export PrimaryKey="<Your_Azure_Cosmos_account_PRIMARY_KEY>"
+```
+
+ ## <a id="object-model"></a>Object model
+
+Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB and the object model used to create and access these resources. Azure Cosmos DB creates resources in the following order:
+
+* Azure Cosmos account
+* Databases
+* Containers
+* Items
+
+To learn more about the hierarchy of entities, see the [working with databases, containers, and items in Azure Cosmos DB](../account-databases-containers-items.md) article. You will use the following .NET classes to interact with these resources:
+
+* [CosmosClient](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+
+* [CreateDatabaseIfNotExistsAsync](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) - This method creates (if it doesn't exist) or gets (if it already exists) a database resource as an asynchronous operation.
+
+* [CreateContainerIfNotExistsAsync](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) - This method creates (if it doesn't exist) or gets (if it already exists) a container as an asynchronous operation.
+
+* [CreateItemAsync](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) - This method creates an item within the container.
+
+* [UpsertItemAsync](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync) - This method creates an item within the container if it doesn't already exist or replaces the item if it already exists.
+
+* [GetItemQueryIterator](/dotnet/api/microsoft.azure.cosmos.container.GetItemQueryIterator) - This method creates a query for items under a container in an Azure Cosmos database using a SQL statement with parameterized values.
+
+* [DeleteAsync](/dotnet/api/microsoft.azure.cosmos.database.deleteasync) - Deletes the specified database from your Azure Cosmos account. The `DeleteAsync` method deletes only the database; disposing of the `CosmosClient` instance happens separately (as the `DeleteDatabaseAndCleanupAsync` method does).
+
+ ## <a id="code-examples"></a>Code examples
+
+The sample code described in this article creates a family database in Azure Cosmos DB. The family database contains family details such as name, address, location, the associated parents, children, and pets. Before you populate the data in your Azure Cosmos account, define the properties of a family item. Create a new class named `Family.cs` at the root level of your sample application and add the following code to it:
+
+```csharp
+using Newtonsoft.Json;
+
+namespace todo
+{
+ public class Family
+ {
+ [JsonProperty(PropertyName = "id")]
+ public string Id { get; set; }
+ public string LastName { get; set; }
+ public Parent[] Parents { get; set; }
+ public Child[] Children { get; set; }
+ public Address Address { get; set; }
+ public bool IsRegistered { get; set; }
+        // The ToString() method is used to format the output. It's used for demo purposes only and isn't required by Azure Cosmos DB.
+ public override string ToString()
+ {
+ return JsonConvert.SerializeObject(this);
+ }
+ }
+
+ public class Parent
+ {
+ public string FamilyName { get; set; }
+ public string FirstName { get; set; }
+ }
+
+ public class Child
+ {
+ public string FamilyName { get; set; }
+ public string FirstName { get; set; }
+ public string Gender { get; set; }
+ public int Grade { get; set; }
+ public Pet[] Pets { get; set; }
+ }
+
+ public class Pet
+ {
+ public string GivenName { get; set; }
+ }
+
+ public class Address
+ {
+ public string State { get; set; }
+ public string County { get; set; }
+ public string City { get; set; }
+ }
+}
+```
+
+### Add the using directives & define the client object
+
+From the project directory, open the `Program.cs` file in your editor and add the following using directives at the top of your application:
+
+```csharp
+
+using System;
+using System.Threading.Tasks;
+using System.Configuration;
+using System.Collections.Generic;
+using System.Net;
+using Microsoft.Azure.Cosmos;
+```
+
+To the **Program.cs** file, add code to read the environment variables that you set in the previous step. Define the `CosmosClient`, `Database`, and `Container` objects. Next, add code to the `Main` method that calls the `GetStartedDemoAsync` method, where you manage Azure Cosmos account resources.
+
+```csharp
+namespace todo
+{
+public class Program
+{
+
+ /// The Azure Cosmos DB endpoint for running this GetStarted sample.
+ private string EndpointUrl = Environment.GetEnvironmentVariable("EndpointUrl");
+
+    /// The primary key for the Azure Cosmos DB account.
+ private string PrimaryKey = Environment.GetEnvironmentVariable("PrimaryKey");
+
+ // The Cosmos client instance
+ private CosmosClient cosmosClient;
+
+ // The database we will create
+ private Database database;
+
+ // The container we will create.
+ private Container container;
+
+ // The name of the database and container we will create
+ private string databaseId = "FamilyDatabase";
+ private string containerId = "FamilyContainer";
+
+ public static async Task Main(string[] args)
+ {
+ try
+ {
+ Console.WriteLine("Beginning operations...\n");
+ Program p = new Program();
+ await p.GetStartedDemoAsync();
+
+ }
+ catch (CosmosException de)
+ {
+ Exception baseException = de.GetBaseException();
+ Console.WriteLine("{0} error occurred: {1}", de.StatusCode, de);
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine("Error: {0}", e);
+ }
+ finally
+ {
+ Console.WriteLine("End of demo, press any key to exit.");
+ Console.ReadKey();
+ }
+ }
+}
+}
+```
+
+### Create a database
+
+Define the `CreateDatabaseAsync` method within the `Program` class. This method creates the `FamilyDatabase` database if it doesn't already exist.
+
+```csharp
+private async Task CreateDatabaseAsync()
+{
+ // Create a new database
+ this.database = await this.cosmosClient.CreateDatabaseIfNotExistsAsync(databaseId);
+ Console.WriteLine("Created Database: {0}\n", this.database.Id);
+}
+```
+
+### Create a container
+
+Define the `CreateContainerAsync` method within the `Program` class. This method creates the `FamilyContainer` container if it doesn't already exist.
+
+```csharp
+/// Create the container if it does not exist.
+/// Specify "/LastName" as the partition key since we're storing family information, to ensure good distribution of requests and storage.
+private async Task CreateContainerAsync()
+{
+ // Create a new container
+ this.container = await this.database.CreateContainerIfNotExistsAsync(containerId, "/LastName");
+ Console.WriteLine("Created Container: {0}\n", this.container.Id);
+}
+```
+
+### Create an item
+
+Create a family item by adding the `AddItemsToContainerAsync` method with the following code. You can use the `CreateItemAsync` or `UpsertItemAsync` methods to create an item:
+
+```csharp
+private async Task AddItemsToContainerAsync()
+{
+ // Create a family object for the Andersen family
+ Family andersenFamily = new Family
+ {
+ Id = "Andersen.1",
+ LastName = "Andersen",
+ Parents = new Parent[]
+ {
+ new Parent { FirstName = "Thomas" },
+ new Parent { FirstName = "Mary Kay" }
+ },
+ Children = new Child[]
+ {
+ new Child
+ {
+ FirstName = "Henriette Thaulow",
+ Gender = "female",
+ Grade = 5,
+ Pets = new Pet[]
+ {
+ new Pet { GivenName = "Fluffy" }
+ }
+ }
+ },
+ Address = new Address { State = "WA", County = "King", City = "Seattle" },
+ IsRegistered = false
+ };
+
+ try
+ {
+ // Create an item in the container representing the Andersen family. Note we provide the value of the partition key for this item, which is "Andersen".
+ ItemResponse<Family> andersenFamilyResponse = await this.container.CreateItemAsync<Family>(andersenFamily, new PartitionKey(andersenFamily.LastName));
+ // Note that after creating the item, we can access the body of the item with the Resource property of the ItemResponse. We can also access the RequestCharge property to see the amount of RUs consumed on this request.
+ Console.WriteLine("Created item in database with id: {0} Operation consumed {1} RUs.\n", andersenFamilyResponse.Resource.Id, andersenFamilyResponse.RequestCharge);
+ }
+ catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.Conflict)
+ {
+ Console.WriteLine("Item in database with id: {0} already exists\n", andersenFamily.Id);
+ }
+}
+
+```
+
+### Query the items
+
+After inserting an item, you can run a query to get the details of the Andersen family. The following code shows how to execute the query by using the SQL query directly. The SQL query to get the Andersen family details is: `SELECT * FROM c WHERE c.LastName = 'Andersen'`. Define the `QueryItemsAsync` method within the `Program` class and add the following code to it:
++
+```csharp
+private async Task QueryItemsAsync()
+{
+ var sqlQueryText = "SELECT * FROM c WHERE c.LastName = 'Andersen'";
+
+ Console.WriteLine("Running query: {0}\n", sqlQueryText);
+
+ QueryDefinition queryDefinition = new QueryDefinition(sqlQueryText);
+ FeedIterator<Family> queryResultSetIterator = this.container.GetItemQueryIterator<Family>(queryDefinition);
+
+ List<Family> families = new List<Family>();
+
+ while (queryResultSetIterator.HasMoreResults)
+ {
+ FeedResponse<Family> currentResultSet = await queryResultSetIterator.ReadNextAsync();
+ foreach (Family family in currentResultSet)
+ {
+ families.Add(family);
+ Console.WriteLine("\tRead {0}\n", family);
+ }
+ }
+}
+
+```
+
+### Delete the database
+
+Finally, you can delete the database by adding the `DeleteDatabaseAndCleanupAsync` method with the following code:
+
+```csharp
+private async Task DeleteDatabaseAndCleanupAsync()
+{
+ DatabaseResponse databaseResourceResponse = await this.database.DeleteAsync();
+    // Also valid: await this.cosmosClient.GetDatabase("FamilyDatabase").DeleteAsync();
+
+ Console.WriteLine("Deleted Database: {0}\n", this.databaseId);
+
+ //Dispose of CosmosClient
+ this.cosmosClient.Dispose();
+}
+```
+
+### Execute the CRUD operations
+
+After you have defined all the required methods, execute them within the `GetStartedDemoAsync` method. The `DeleteDatabaseAndCleanupAsync` method is commented out in this code because you will not see any resources if that method is executed. You can uncomment it after validating that your Azure Cosmos DB resources were created in the Azure portal.
+
+```csharp
+public async Task GetStartedDemoAsync()
+{
+ // Create a new instance of the Cosmos Client
+ this.cosmosClient = new CosmosClient(EndpointUrl, PrimaryKey);
+ await this.CreateDatabaseAsync();
+ await this.CreateContainerAsync();
+ await this.AddItemsToContainerAsync();
+ await this.QueryItemsAsync();
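+    // Uncomment after you validate in the Azure portal that the resources above were created:
+    // await this.DeleteDatabaseAndCleanupAsync();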
+}
+```
+
+After you add all the required methods, save the `Program.cs` file.
+
+## Run the code
+
+Next, build and run the application to create the Azure Cosmos DB resources. Make sure to open a new command prompt window; don't use the same instance that you used to set the environment variables, because the variables aren't visible in that already-open window.
+
+```console
+dotnet build
+```
+
+```console
+dotnet run
+```
+
+The following output is generated when you run the application. You can also sign into the Azure portal and validate that the resources are created:
+
+```console
+Created Database: FamilyDatabase
+
+Created Container: FamilyContainer
+
+Created item in database with id: Andersen.1 Operation consumed 11.62 RUs.
+
+Running query: SELECT * FROM c WHERE c.LastName = 'Andersen'
+
+ Read {"id":"Andersen.1","LastName":"Andersen","Parents":[{"FamilyName":null,"FirstName":"Thomas"},{"FamilyName":null,"FirstName":"Mary Kay"}],"Children":[{"FamilyName":null,"FirstName":"Henriette Thaulow","Gender":"female","Grade":5,"Pets":[{"GivenName":"Fluffy"}]}],"Address":{"State":"WA","County":"King","City":"Seattle"},"IsRegistered":false}
+
+End of demo, press any key to exit.
+```
+
+You can validate that the data is created by signing in to the Azure portal and viewing the required items in your Azure Cosmos account.
+
+## Clean up resources
+
+When no longer needed, you can use the Azure CLI or Azure PowerShell to remove the Azure Cosmos account and the corresponding resource group. The following command shows how to delete the resource group by using the Azure CLI:
+
+```azurecli
+az group delete -g "myResourceGroup"
+```
+
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos account, and how to create a database and a container by using a .NET Core app. You can now import additional data to your Azure Cosmos account by using the instructions in the following article.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Java Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-java-changefeed.md
+
+ Title: Create an end-to-end Azure Cosmos DB Java SDK v4 application sample by using Change Feed
+description: This guide walks you through a simple Java SQL API application which inserts documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using Change Feed.
+++
+ms.devlang: java
+ Last updated : 06/11/2020++++
+# How to create a Java application that uses Azure Cosmos DB SQL API and change feed processor
+
+This how-to guide walks you through a simple Java application which uses the Azure Cosmos DB SQL API to insert documents into an Azure Cosmos DB container, while maintaining a materialized view of the container using Change Feed and Change Feed Processor. The Java application communicates with the Azure Cosmos DB SQL API using Azure Cosmos DB Java SDK v4.
+
+> [!IMPORTANT]
+> This tutorial is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+>
+
+## Prerequisites
+
+* The URI and key for your Azure Cosmos DB account
+
+* Maven
+
+* Java 8
+
+## Background
+
+The Azure Cosmos DB change feed provides an event-driven interface to trigger actions in response to document insertion, and it has many uses. For example, in applications that are both read and write heavy, a chief use of the change feed is to create a real-time **materialized view** of a container as it ingests documents. The materialized view container holds the same data but partitioned for efficient reads, making the application both read and write efficient.
+
+The work of managing change feed events is largely taken care of by the Change Feed Processor library built into the SDK. This library is powerful enough to distribute change feed events among multiple workers, if that is desired. All you have to do is provide the Change Feed Processor library with a callback.
+
+This simple example demonstrates the Change Feed Processor library with a single worker creating and deleting documents from a materialized view.
+
+## Setup
+
+If you have not already done so, clone the app example repo:
+
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-app-example.git
+```
+
+Open a terminal in the repo directory. Build the app by running:
+
+```bash
+mvn clean package
+```
+
+## Walkthrough
+
+1. As a first check, you should have an Azure Cosmos DB account. Open the **Azure portal** in your browser, go to your Azure Cosmos DB account, and in the left pane navigate to **Data Explorer**.
+
+ :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_account_empty.JPG" alt-text="Azure Cosmos DB account":::
+
+1. Run the app in the terminal using the following command:
+
+ ```bash
+ mvn exec:java -Dexec.mainClass="com.azure.cosmos.workedappexample.SampleGroceryStore" -DACCOUNT_HOST="your-account-uri" -DACCOUNT_KEY="your-account-key" -Dexec.cleanupDaemonThreads=false
+ ```
+
+1. Press enter when you see
+
+ ```bash
+ Press enter to create the grocery store inventory system...
+ ```
+
+ then return to the Azure portal Data Explorer in your browser. You will see a database **GroceryStoreDatabase** has been added with three empty containers:
+
+ * **InventoryContainer** - The inventory record for our example grocery store, partitioned on item ```id``` which is a UUID.
+ * **InventoryContainer-pktype** - A materialized view of the inventory record, optimized for queries over item ```type```
+ * **InventoryContainer-leases** - A leases container is always needed for change feed; leases track the app's progress in reading the change feed.
+
+ :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_account_resources_lease_empty.JPG" alt-text="Empty containers":::
+
+1. In the terminal, you should now see a prompt
+
+ ```bash
+ Press enter to start creating the materialized view...
+ ```
+
+ Press enter. Now the following block of code will execute and initialize the change feed processor on another thread:
+
+ ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=InitializeCFP)]
+
+ ```"SampleHost_1"``` is the name of the Change Feed processor worker. ```changeFeedProcessorInstance.start()``` is what actually starts the Change Feed processor.
+
+   Return to the Azure portal Data Explorer in your browser. Under the **InventoryContainer-leases** container, click **items** to see its contents. You will see that the Change Feed Processor has populated the lease container; that is, the processor has assigned the ```SampleHost_1``` worker a lease on some partitions of the **InventoryContainer**.
+
+ :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_leases.JPG" alt-text="Leases":::
+
+1. Press enter again in the terminal. This will trigger 10 documents to be inserted into **InventoryContainer**. Each document insertion appears in the change feed as JSON; the following callback code handles these events by mirroring the JSON documents into a materialized view:
+
+ ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=CFPCallback)]
+
+1. Allow the code to run for 5-10 seconds. Then return to the Azure portal Data Explorer and navigate to **InventoryContainer > items**. You should see that items are being inserted into the inventory container; note the partition key (```id```).
+
+ :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_items.JPG" alt-text="Feed container":::
+
+1. Now, in Data Explorer navigate to **InventoryContainer-pktype > items**. This is the materialized view - the items in this container mirror **InventoryContainer** because they were inserted programmatically by change feed. Note the partition key (```type```). So this materialized view is optimized for queries filtering over ```type```, which would be inefficient on **InventoryContainer** because it is partitioned on ```id```.
+
+ :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_materializedview2.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos D B account with Items selected.":::
+
+1. We're going to delete a document from both **InventoryContainer** and **InventoryContainer-pktype** using just a single ```upsertItem()``` call. First, take a look at Azure portal Data Explorer. We'll delete the document for which ```/type == "plums"```; it is encircled in red below
+
+ :::image type="content" source="media/create-sql-api-java-changefeed/cosmos_materializedview-emph-todelete.JPG" alt-text="Screenshot shows the Data Explorer page for an Azure Cosmos D B account with a particular item I D selected.":::
+
+   Press enter again to call the function ```deleteDocument()``` in the example code. This function, shown below, upserts a new version of the document with ```/ttl == 5```, which sets the document's Time-To-Live (TTL) to 5 seconds.
+
+ ### <a id="java4-connection-policy-async"></a>Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+ [!code-java[](~/azure-cosmos-java-sql-app-example/src/main/java/com/azure/cosmos/workedappexample/SampleGroceryStore.java?name=DeleteWithTTL)]
+
+   The change feed ```feedPollDelay``` is set to 100ms; therefore, the change feed responds to this update almost instantly and calls ```updateInventoryTypeMaterializedView()```, shown above. That last function call upserts the new document with a TTL of 5 seconds into **InventoryContainer-pktype**.
+
+ The effect is that after about 5 seconds, the document will expire and be deleted from both containers.
+
+    This procedure is necessary because the change feed only issues events on item insertion or update, not on item deletion.
+
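+    As an illustration of this pattern, here is a minimal sketch (the `InventoryItem` POJO with a settable `ttl` field is hypothetical; the sample's actual implementation is in the snippet above):
+
+    ```java
+    import com.azure.cosmos.CosmosAsyncContainer;
+
+    // Hypothetical helper mirroring the sample's deleteDocument() approach: rather than
+    // deleting directly, upsert a new version of the item with a short TTL so Cosmos DB
+    // expires it. TTL must be enabled on the container for the expiry to take effect.
+    public static void deleteViaTtl(CosmosAsyncContainer container, InventoryItem item) {
+        item.setTtl(5); // seconds until expiry
+        container.upsertItem(item).block();
+    }
+    ```
+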
+1. Press enter one more time to close the program and clean up its resources.
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-java.md
+
+ Title: Quickstart - Use Java to create a document database using Azure Cosmos DB
+description: This quickstart presents a Java code sample you can use to connect to and query the Azure Cosmos DB SQL API
+++
+ms.devlang: java
+ Last updated : 08/26/2021++++
+# Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark v3 connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
+
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Java app using the SQL Java SDK, and then add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+> [!IMPORTANT]
+> This quickstart is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+>
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
+- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
+
+## Introductory notes
+
+*The structure of a Cosmos DB account.* Irrespective of API or programming language, a Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
++
+You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
+
+The provisioned throughput is measured in Request Units (*RUs*), which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity; however, container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
+
+As items are inserted into a Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key which maps each document to a partition. Each partition is assigned a roughly equal slice of the range of partition key values, so you are advised to choose a partition key that is relatively random or evenly distributed. Otherwise, some partitions will see substantially more requests (*hot partition*) while others see substantially fewer (*cold partition*), which is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
+
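+To make these two container-level settings concrete, here is a minimal Java SDK V4 sketch (it assumes an existing `CosmosDatabase` reference named `database`; the container name and partition key path are illustrative):
+
+```java
+import com.azure.cosmos.CosmosDatabase;
+import com.azure.cosmos.models.CosmosContainerProperties;
+import com.azure.cosmos.models.ThroughputProperties;
+
+// Both settings are fixed when the container is created:
+// the partition key path and the provisioned throughput (in RU/s).
+CosmosContainerProperties containerProperties =
+    new CosmosContainerProperties("FamilyContainer", "/lastName"); // partition key path
+ThroughputProperties throughput = ThroughputProperties.createManualThroughput(400); // 400 RU/s
+
+database.createContainerIfNotExists(containerProperties, throughput);
+```
+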
+## Create a database account
+
+Before you can create a document database, you need to create a SQL API account with Azure Cosmos DB.
++
+## Add a container
++
+<a id="add-sample-data"></a>
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone a SQL API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmos-java-getting-started.git
+```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
++
+# [Sync API](#tab/sync)
+
+### Managing database resources using the synchronous (sync) API
+
+* `CosmosClient` initialization. The `CosmosClient` provides client-side logical representation for the Azure Cosmos database service. This client is used to configure and execute requests against the service.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateSyncClient)]
+
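+    A minimal sketch of such an initialization (the endpoint and key placeholders are stand-ins for your account's values; the authoritative code is in the snippet above):
+
+    ```java
+    import com.azure.cosmos.ConsistencyLevel;
+    import com.azure.cosmos.CosmosClient;
+    import com.azure.cosmos.CosmosClientBuilder;
+
+    // Build a sync client for the account identified by endpoint and key.
+    CosmosClient client = new CosmosClientBuilder()
+        .endpoint("<ACCOUNT_HOST>")
+        .key("<ACCOUNT_KEY>")
+        .consistencyLevel(ConsistencyLevel.EVENTUAL) // illustrative; choose what your app needs
+        .buildClient();
+    ```
+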
+* `CosmosDatabase` creation.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateDatabaseIfNotExists)]
+
+* `CosmosContainer` creation.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateContainerIfNotExists)]
+
+* Item creation by using the `createItem` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateItem)]
+
+* Point reads are performed using the `readItem` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=ReadItem)]
+
+* SQL queries over JSON are performed using the `queryItems` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=QueryItems)]
+
+# [Async API](#tab/async)
+
+### Managing database resources using the asynchronous (async) API
+
+* Async API calls return immediately, without waiting for a response from the server. In light of this, the following code snippets show proper design patterns for accomplishing all of the preceding management tasks using the async API.
+
+* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos database service. This client is used to configure and execute asynchronous requests against the service.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateAsyncClient)]
+
+* `CosmosAsyncDatabase` creation.
+
+    [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateDatabaseIfNotExists)]
+
+* `CosmosAsyncContainer` creation.
+
+    [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateContainerIfNotExists)]
+
+* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream which issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program does not terminate during item creation. **The proper asynchronous programming practice is not to block on async calls - in realistic use cases, requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.** A minimal sketch of this pattern follows the snippet below.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateItem)]
+
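+    A minimal sketch of this pattern (it assumes a `List<Family>` named `families` and a `CosmosAsyncContainer` named `container`; the authoritative code is in the snippet above):
+
+    ```java
+    import java.util.concurrent.CountDownLatch;
+    import reactor.core.publisher.Flux;
+
+    // Issue all createItem requests through one Reactive Stream,
+    // then latch until the stream completes.
+    CountDownLatch completionLatch = new CountDownLatch(1);
+
+    Flux.fromIterable(families)
+        .flatMap(container::createItem)           // async create, one request per item
+        .doOnNext(response -> System.out.println(
+            "Created item, RU charge: " + response.getRequestCharge()))
+        .doOnComplete(completionLatch::countDown) // release the latch when all items are created
+        .subscribe();
+
+    try {
+        completionLatch.await();                  // block only because this demo exits afterwards
+    } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+    }
+    ```
+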
+* As with the sync API, point reads are performed using the `readItem` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=ReadItem)]
+
+* As with the sync API, SQL queries over JSON are performed using the `queryItems` method.
+
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=QueryItems)]
+++
+## Run the app
+
+Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
+
+1. In the git terminal window, `cd` to the sample code folder.
+
+ ```bash
+ cd azure-cosmos-java-getting-started
+ ```
+
+2. In the git terminal window, use the following command to install the required Java packages.
+
+ ```bash
+ mvn package
+ ```
+
+3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal.
+
+ ```bash
+ mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
+ ```
+
+    The terminal window displays a notification that the AzureSampleFamilyDB database was created.
+
+4. The app creates a database named `AzureSampleFamilyDB`.
+5. The app creates a container named `FamilyContainer`.
+6. The app performs point reads using object IDs and the partition key value (`lastName` in our sample).
+7. The app queries items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson').
+
+8. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB SQL API account, create a document database and container using the Data Explorer, and run a Java app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-nodejs.md
+
+ Title: Quickstart- Use Node.js to query from Azure Cosmos DB SQL API account
+description: How to use Node.js to create an app that connects to Azure Cosmos DB SQL API account and queries data.
+++
+ms.devlang: nodejs
+ Last updated : 08/26/2021++++
+# Quickstart: Use Node.js to connect and query data from Azure Cosmos DB SQL API account
+
+> [!div class="op_single_selector"]
+> - [.NET V3](create-sql-api-dotnet.md)
+> - [.NET V4](create-sql-api-dotnet-V4.md)
+> - [Java SDK v4](create-sql-api-java.md)
+> - [Spring Data v3](create-sql-api-spring-data.md)
+> - [Spark v3 connector](create-sql-api-spark.md)
+> - [Node.js](create-sql-api-nodejs.md)
+> - [Python](create-sql-api-python.md)
+> - [Xamarin](create-sql-api-xamarin-dotnet.md)
+
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Walkthrough video
+
+Watch this video for a complete walkthrough of the content in this article.
+
+> [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- [Node.js 6.0.0+](https://nodejs.org/).
+- [Git](https://www.git-scm.com/downloads).
+
+## Create an Azure Cosmos account
+
+For the purposes of this quickstart, you can use the [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option to create an Azure Cosmos account.
+
+1. Navigate to the [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) page.
+
+1. Choose the **SQL** API account and select **Create**. Sign in using your Microsoft account.
+
+1. After the sign-in is successful, your Azure Cosmos account should be ready. Select **Open in the Azure portal** to open the newly created account.
+
+The "try Azure Cosmos DB for free" option doesn't require an Azure subscription and it offers you an Azure Cosmos account for a limited period of 30 days. If you want to use the Azure Cosmos account for a longer period, you should instead [create the account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) within your Azure subscription.
+
+## Add a container
++
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's clone a Node.js app from GitHub, set the connection string, and run it.
+
+1. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-getting-started.git
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the Azure Cosmos database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+
+If you're familiar with the previous version of the SQL JavaScript SDK, you may be used to seeing the terms _collection_ and _document_. Because Azure Cosmos DB supports [multiple API models](../introduction.md), [version 2.0+ of the JavaScript SDK](https://www.npmjs.com/package/@azure/cosmos) uses the generic terms _container_, which may be a collection, graph, or table, and _item_ to describe the content of the container.
+
+The Cosmos DB JavaScript SDK is called "@azure/cosmos" and can be installed from npm:
+
+```bash
+npm install @azure/cosmos
+```
+
+The following snippets are all taken from the _app.js_ file.
+
+- The `CosmosClient` is imported from the `@azure/cosmos` npm package.
+
+ ```javascript
+ const CosmosClient = require("@azure/cosmos").CosmosClient;
+ ```
+
+- A new `CosmosClient` object is initialized.
+
+ ```javascript
+ const client = new CosmosClient({ endpoint, key });
+ ```
+
+- Select the "Tasks" database.
+
+ ```javascript
+ const database = client.database(databaseId);
+ ```
+
+- Select the "Items" container/collection.
+
+ ```javascript
+ const container = database.container(containerId);
+ ```
+
+- Select all the items in the "Items" container.
+
+ ```javascript
+ // query to return all items
+ const querySpec = {
+ query: "SELECT * from c"
+ };
+
+ const { resources: items } = await container.items
+ .query(querySpec)
+ .fetchAll();
+ ```
+
+- Create a new item
+
+ ```javascript
+ const { resource: createdItem } = await container.items.create(newItem);
+ ```
+
+- Update an item
+
+ ```javascript
+ const { id, category } = createdItem;
+
+ createdItem.isComplete = true;
+ const { resource: updatedItem } = await container
+ .item(id, category)
+ .replace(createdItem);
+ ```
+
+- Delete an item
+
+ ```javascript
+ const { resource: result } = await container.item(id, category).delete();
+ ```
+
+> [!NOTE]
+> In both the "update" and "delete" methods, the item has to be selected from the database by calling `container.item()`. The two parameters passed in are the id of the item and the item's partition key. In this case, the partition key is the value of the "category" field.
+
+## Update your connection string
+
+Now go back to the Azure portal to get the connection string details of your Azure Cosmos account. Copy the connection string into the app so that it can connect to your database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys** from the left navigation, and then select **Read-write Keys**. Use the copy buttons on the right side of the screen to copy the URI and Primary Key into the _app.js_ file in the next step.
+
+ :::image type="content" source="./media/create-sql-api-dotnet/keys.png" alt-text="View and copy an access key in the Azure portal, Keys blade":::
+
+2. Open the _config.js_ file.
+
+3. Copy your URI value from the portal (using the copy button) and make it the value of the `endpoint` key in _config.js_.
+
+ `endpoint: "<Your Azure Cosmos account URI>"`
+
+4. Then copy your PRIMARY KEY value from the portal and make it the value of the `config.key` in _config.js_. You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+ `key: "<Your Azure Cosmos account key>"`
+
+## Run the app
+
+1. Run `npm install` in a terminal to install the "@azure/cosmos" npm package.
+
+2. Run `node app.js` in a terminal to start your node application.
+
+3. The two items that you created earlier in this quickstart are listed out. A new item is created. The "isComplete" flag on that item is updated to "true", and then finally the item is deleted.
+
+You can continue to experiment with this sample application or go back to Data Explorer, modify, and work with your data.
+
+## Review SLAs in the Azure portal
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a Node.js app. You can now import additional data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-python.md
+
+ Title: 'Quickstart: Build a Python app using Azure Cosmos DB SQL API account'
+description: Presents a Python code sample you can use to connect to and query the Azure Cosmos DB SQL API
+++
+ms.devlang: python
+ Last updated : 08/26/2021++++
+# Quickstart: Build a Python application using an Azure Cosmos DB SQL API account
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark v3 connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
+
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and from Visual Studio Code with a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+
+- A Cosmos DB account. Your options are:
+ * Within an Azure active subscription:
+ * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription
+ * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers)
+ * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier)
+ * Without an Azure active subscription:
+    * [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/), a test environment that lasts for 30 days.
+ * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
+- [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
+- [Visual Studio Code](https://code.visualstudio.com/).
+- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
+- [Git](https://www.git-scm.com/downloads).
+- [Azure Cosmos DB SQL API SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)
+
+## Create a database account
++
+## Add a container
++
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's clone a SQL API app from GitHub, set the connection string, and run it. This quickstart uses version 4 of the [Python SDK](https://pypi.org/project/azure-cosmos/#history).
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```cmd
+ md "git-samples"
+ ```
+ If you are using a bash prompt, you should instead use the following command:
+
+ ```bash
+ mkdir "git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-python-getting-started.git
+ ```
+
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys** from the left navigation. Use the copy buttons on the right side of the screen to copy the **URI** and **Primary Key** into the *cosmos_get_started.py* file in the next step.
+
+ :::image type="content" source="./media/create-sql-api-dotnet/access-key-and-uri-in-keys-settings-in-the-azure-portal.png" alt-text="Get an access key and URI in the Keys settings in the Azure portal":::
+
+2. In Visual Studio Code, open the *cosmos_get_started.py* file in *\git-samples\azure-cosmos-db-python-getting-started*.
+
+3. Copy your **URI** value from the portal (using the copy button) and make it the value of the **endpoint** variable in *cosmos_get_started.py*.
+
+ `endpoint = 'https://FILLME.documents.azure.com',`
+
+4. Then copy your **PRIMARY KEY** value from the portal and make it the value of the **key** in *cosmos_get_started.py*. You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+ `key = 'FILLME'`
+
+5. Save the *cosmos_get_started.py* file.
+
+## Review the code
+
+This step is optional. Learn about the database resources created in code, or skip ahead to [Run the app](#run-the-app).
+
+The following snippets are all taken from the *cosmos_get_started.py* file.
+
+* The CosmosClient is initialized. Make sure to update the "endpoint" and "key" values as described in the [Update your connection string](#update-your-connection-string) section.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_cosmos_client)]
+
+* A new database is created.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_database_if_not_exists)]
+
+* A new container is created, with 400 RU/s of [provisioned throughput](../request-units.md). We choose `lastName` as the [partition key](../partitioning-overview.md#choose-partitionkey), which allows us to do efficient queries that filter on this property.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_container_if_not_exists)]
+
+* Some items are added to the container. Containers are collections of items (JSON documents) that can have varied schemas. The helper methods ```get_[name]_family_item``` return representations of a family that are stored in Azure Cosmos DB as JSON documents.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=create_item)]
+
+* Point reads (key value lookups) are performed using the `read_item` method. We print out the [RU charge](../request-units.md) of each operation.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=read_item)]
+
+* A query is performed using SQL query syntax. Because we're using partition key values of ```lastName``` in the WHERE clause, Azure Cosmos DB will efficiently route this query to the relevant partitions, improving performance.
+
+ [!code-python[](~/azure-cosmos-db-python-getting-started/cosmos_get_started.py?name=query_items)]
+
+## Run the app
+
+1. In Visual Studio Code, select **View** > **Command Palette**.
+
+2. At the prompt, enter **Python: Select Interpreter** and then select the version of Python to use.
+
+ The Footer in Visual Studio Code is updated to indicate the interpreter selected.
+
+3. Select **View** > **Integrated Terminal** to open the Visual Studio Code integrated terminal.
+
+4. In the integrated terminal window, ensure you are in the *azure-cosmos-db-python-getting-started* folder. If not, run the following command to switch to the sample folder.
+
+ ```cmd
+ cd "\git-samples\azure-cosmos-db-python-getting-started"`
+ ```
+
+5. Run the following command to install the azure-cosmos package.
+
+    ```bash
+ pip install --pre azure-cosmos
+ ```
+
+ If you get an error about access being denied when attempting to install azure-cosmos, you'll need to [run VS Code as an administrator](https://stackoverflow.com/questions/37700536/visual-studio-code-terminal-how-to-run-a-command-with-administrator-rights).
+
+6. Run the following command to run the sample and create and store new documents in Azure Cosmos DB.
+
+    ```bash
+ python cosmos_get_started.py
+ ```
+
+7. To confirm the new items were created and saved, in the Azure portal, select **Data Explorer** > **AzureSampleFamilyDatabase** > **Items**. View the items that were created. For example, here is a sample JSON document for the Andersen family:
+
+ ```json
+ {
+ "id": "Andersen-1569479288379",
+ "lastName": "Andersen",
+ "district": "WA5",
+ "parents": [
+ {
+ "familyName": null,
+ "firstName": "Thomas"
+ },
+ {
+ "familyName": null,
+ "firstName": "Mary Kay"
+ }
+ ],
+ "children": null,
+ "address": {
+ "state": "WA",
+ "county": "King",
+ "city": "Seattle"
+ },
+ "registered": true,
+ "_rid": "8K5qAIYtZXeBhB4AAAAAAA==",
+ "_self": "dbs/8K5qAA==/colls/8K5qAIYtZXc=/docs/8K5qAIYtZXeBhB4AAAAAAA==/",
+ "_etag": "\"a3004d78-0000-0800-0000-5d8c5a780000\"",
+ "_attachments": "attachments/",
+ "_ts": 1569479288
+ }
+ ```
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a Python app in Visual Studio Code. You can now import additional data to your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-spark.md
+
+ Title: Quickstart - Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API
+description: This quickstart presents a code sample for the Azure Cosmos DB Spark 3 OLTP Connector for SQL API that you can use to connect to and query data in your Azure Cosmos DB account
+++
+ms.devlang: java
+ Last updated : 05/27/2021++++
+# Quickstart: Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark 3 OLTP connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
++
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-spring-data.md
+
+ Title: Quickstart - Use Spring Data Azure Cosmos DB v3 to create a document database using Azure Cosmos DB
+description: This quickstart presents a Spring Data Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB SQL API
+++
+ms.devlang: java
+ Last updated : 08/26/2021++++
+# Quickstart: Build a Spring Data Azure Cosmos DB v3 app to manage Azure Cosmos DB SQL API data
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark v3 connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
+
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+> [!IMPORTANT]
+> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sql-api-sdk-java-spring-v2.md).
+>
+> Spring Data Azure Cosmos DB supports only the SQL API.
+>
+> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
+> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
+> * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
+>
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
+- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
+
+## Introductory notes
+
+*The structure of a Cosmos DB account.* Irrespective of API or programming language, a Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
++
+You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
+
+The provisioned throughput is measured in Request Units (*RUs*), which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity; however, container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
+
+As items are inserted into a Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key which maps each document to a partition. Each partition is assigned a roughly equal slice of the range of partition key values, so you are advised to choose a partition key that is relatively random or evenly distributed. Otherwise, some partitions will see substantially more requests (*hot partition*) while others see substantially fewer (*cold partition*), which is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
+
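+In Spring Data Azure Cosmos DB, the partition key is declared on the document model itself. The following is a minimal sketch of a hypothetical entity class (field names are illustrative, though they match the repository methods shown later in this article):
+
+```java
+import org.springframework.data.annotation.Id;
+import com.azure.spring.data.cosmos.core.mapping.Container;
+import com.azure.spring.data.cosmos.core.mapping.PartitionKey;
+
+@Container(containerName = "users")
+public class User {
+
+    @Id
+    private String id;
+
+    // lastName is the partition key: queries that filter on it
+    // can be routed to a single partition.
+    @PartitionKey
+    private String lastName;
+
+    private String firstName;
+
+    // getters and setters omitted for brevity
+}
+```
+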
+## Create a database account
+
+Before you can create a document database, you need to create a SQL API account with Azure Cosmos DB.
++
+## Add a container
++
+<a id="add-sample-data"></a>
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone a SQL API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+```bash
+git clone https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started.git
+```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
+
+### Application configuration file
+
+Here we showcase how Spring Boot and Spring Data enhance the user experience - the process of establishing a Cosmos client and connecting to Cosmos resources is now configuration rather than code. At application startup, Spring Boot handles all of this boilerplate using the settings in **application.properties**:
+
+```properties
+cosmos.uri=${ACCOUNT_HOST}
+cosmos.key=${ACCOUNT_KEY}
+cosmos.secondaryKey=${SECONDARY_ACCOUNT_KEY}
+
+dynamic.collection.name=spel-property-collection
+# Populate query metrics
+cosmos.queryMetricsEnabled=true
+```
+
+Once you create an Azure Cosmos DB account, database, and container, just fill in the blanks in the config file and Spring Boot/Spring Data will automatically do the following: (1) create an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connect to the database and container. You're all set - **no more resource management code!**
+
+### Java source
+
+The Spring Data value-add also comes from its simple, clean, standardized and platform-independent interface for operating on datastores. Building on the Spring Data GitHub sample linked above, below are CRUD and query samples for manipulating Azure Cosmos DB documents with Spring Data Azure Cosmos DB.
+
+* Item creation and updates by using the `save` method.
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Create)]
+
+* Point-reads using the derived query method defined in the repository. The `findByIdAndLastName` method in `UserRepository` performs a point-read. The fields mentioned in the method name cause Spring Data to execute a point-read defined by the `id` and `lastName` fields:
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Read)]
+
+* Item deletes using `deleteAll`:
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Delete)]
+
+* Derived query based on repository method name. Spring Data implements the `UserRepository` `findByFirstName` method as a Java SDK SQL query on the `firstName` field (this query could not be implemented as a point-read):
+
+ [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Query)]
+
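+Taken together, a hypothetical repository interface behind these snippets might look like the following sketch (the sample's actual interface may differ, for example by using the reactive repository variant):
+
+```java
+import java.util.List;
+import java.util.Optional;
+
+import com.azure.spring.data.cosmos.repository.CosmosRepository;
+import org.springframework.stereotype.Repository;
+
+@Repository
+public interface UserRepository extends CosmosRepository<User, String> {
+
+    // Derived point-read: id plus lastName (the partition key) identify a single document.
+    Optional<User> findByIdAndLastName(String id, String lastName);
+
+    // Derived query: implemented as a SQL query on firstName (not a point-read).
+    List<User> findByFirstName(String firstName);
+}
+```
+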
+## Run the app
+
+Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
+
+1. In the git terminal window, `cd` to the sample code folder.
+
+ ```bash
+ cd azure-spring-data-cosmos-java-sql-api-getting-started/azure-spring-data-cosmos-java-getting-started/
+ ```
+
+2. In the git terminal window, use the following command to install the required Spring Data Azure Cosmos DB packages.
+
+ ```bash
+ mvn clean package
+ ```
+
+3. In the git terminal window, use the following command to start the Spring Data Azure Cosmos DB application:
+
+ ```bash
+ mvn spring-boot:run
+ ```
+
+4. The app loads **application.properties** and connects to the resources in your Azure Cosmos DB account.
+5. The app will perform the point CRUD operations described above.
+6. The app will perform a derived query.
+7. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB SQL API account, create a document database and container using the Data Explorer, and run a Spring Data app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Xamarin Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-xamarin-dotnet.md
+
+ Title: 'Azure Cosmos DB: Build a todo app with Xamarin'
+description: Presents a Xamarin code sample you can use to connect to and query Azure Cosmos DB
+++
+ms.devlang: dotnet
+ Last updated : 03/07/2021++++
+# Quickstart: Build a todo app with Xamarin using Azure Cosmos DB SQL API account
+
+> [!div class="op_single_selector"]
+> * [.NET V3](create-sql-api-dotnet.md)
+> * [.NET V4](create-sql-api-dotnet-V4.md)
+> * [Java SDK v4](create-sql-api-java.md)
+> * [Spring Data v3](create-sql-api-spring-data.md)
+> * [Spark v3 connector](create-sql-api-spark.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Python](create-sql-api-python.md)
+> * [Xamarin](create-sql-api-xamarin-dotnet.md)
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+> [!NOTE]
+> Sample code for an entire canonical sample Xamarin app showcasing multiple Azure offerings, including Cosmos DB, can be found on GitHub [here](https://github.com/xamarinhq/app-geocontacts). This app demonstrates viewing geographically dispersed contacts, and allows those contacts to update their location.
+
+This quickstart demonstrates how to create an Azure Cosmos DB SQL API account, document database, and container using the Azure portal. You'll then build and deploy a todo list mobile app built on the [SQL .NET API](sql-api-sdk-dotnet.md) and [Xamarin](/xamarin/) utilizing [Xamarin.Forms](/xamarin/) and the [MVVM architectural pattern](/xamarin/xamarin-forms/xaml/xaml-basics/data-bindings-to-mvvm).
++
+## Prerequisites
+
+If you are developing on Windows and don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** and **Mobile Development with .NET** workloads during the Visual Studio setup.
+
+If you are using a Mac, you can download the **free** [Visual Studio for Mac](https://www.visualstudio.com/vs/mac/).
++
+## Create a database account
++
+## Add a container
++
+## Add sample data
++
+## Query your data
++
+## Clone the sample application
+
+Now let's clone the Xamarin SQL API app from GitHub, review the code, obtain the API keys, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ mkdir "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-sql-xamarin-getting-started.git
+ ```
+
+4. In Visual Studio, open **C:\git-samples\azure-cosmos-db-sql-xamarin-getting-started\src\ToDoItems.sln**
+
+## Obtain your API keys
+
+Go back to the Azure portal to get your API key information and copy it into the app.
+
+1. In the [Azure portal](https://portal.azure.com/), in your Azure Cosmos DB SQL API account, in the left navigation click **Keys**, and then click **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the URI and Primary Key into the APIKeys.cs file in the next step.
+
+ :::image type="content" source="./media/create-sql-api-xamarin-dotnet/keys.png" alt-text="View and copy an access key in the Azure portal, Keys blade":::
+
+2. In Visual Studio, open **ToDoItems.Core/Helpers/APIKeys.cs**.
+
+3. In the Azure portal, using the copy button, copy the **URI** value and make it the value of the `CosmosEndpointUrl` variable in APIKeys.cs.
+
+ ```csharp
+ //#error Enter the URL of your Azure Cosmos DB endpoint here
+ public static readonly string CosmosEndpointUrl = "[URI Copied from Azure portal]";
+ ```
+
+4. In the Azure portal, using the copy button, copy the **PRIMARY KEY** value and make it the value of the `CosmosAuthKey` variable in APIKeys.cs.
+
+ ```csharp
+ //#error Enter the read/write authentication key of your Azure Cosmos DB endpoint here
+    public static readonly string CosmosAuthKey = "[PRIMARY KEY copied from Azure portal]";
+ ```
++
+## Review the code
+
+This solution demonstrates how to create a ToDo app using the Azure Cosmos DB SQL API and Xamarin.Forms. The app has two tabs: the first tab contains a list view showing todo items that are not yet complete, and the second tab displays todo items that have been completed. In addition to viewing incomplete todo items in the first tab, you can also add new todo items, edit existing ones, and mark items as completed.
++
+The code in the ToDoItems solution contains:
+
+* **ToDoItems.Core**
+ * This is a .NET Standard project holding a Xamarin.Forms project and shared application logic code that maintains todo items within Azure Cosmos DB.
+* **ToDoItems.Android**
+ * This project contains the Android app.
+* **ToDoItems.iOS**
+ * This project contains the iOS app.
+
+Now let's take a quick review of how the app communicates with Azure Cosmos DB.
+
+* The [Microsoft.Azure.DocumentDb.Core](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/) NuGet package must be added to all projects.
+* The `ToDoItem` class in the **ToDoItems.Core/Models** folder models the documents in the **Items** container created above. Note that property naming is case-sensitive.
+* The `CosmosDBService` class in the **ToDoItems.Core/Services** folder encapsulates the communication to Azure Cosmos DB.
+* Within the `CosmosDBService` class there is a `DocumentClient` type variable. The `DocumentClient` is used to configure and execute requests against the Azure Cosmos DB account, and is instantiated:
+
+ ```csharp
+ docClient = new DocumentClient(new Uri(APIKeys.CosmosEndpointUrl), APIKeys.CosmosAuthKey);
+ ```
+
+* When querying a container for documents, the `DocumentClient.CreateDocumentQuery<T>` method is used, as seen here in the `CosmosDBService.GetToDoItems` function:
+
+ [!code-csharp[](~/samples-cosmosdb-xamarin/src/ToDoItems.Core/Services/CosmosDBService.cs?name=GetToDoItems)]
+
+    The `CreateDocumentQuery<T>` method takes a URI that points to the container created in the previous section. You can also specify LINQ operators such as a `Where` clause. In this case, only todo items that are not completed are returned.
+
+    The `CreateDocumentQuery<T>` function is executed synchronously and returns an `IQueryable<T>`. However, the `AsDocumentQuery` method converts the `IQueryable<T>` to an `IDocumentQuery<T>` object, which can be executed asynchronously so that the UI thread of a mobile application isn't blocked.
+
+ The `IDocumentQuery<T>.ExecuteNextAsync<T>` function retrieves the page of results from Azure Cosmos DB, which `HasMoreResults` will examine in order to see if additional results remain to be returned.
+
+> [!TIP]
+> Several functions that operate on Azure Cosmos containers and documents take a URI as a parameter, which specifies the address of the container or document. This URI is constructed using the `UriFactory` class. URIs for databases, containers, and documents can all be created with this class.
+
+* The `CosmosDBService.InsertToDoItem` function demonstrates how to insert a new document:
+
+ [!code-csharp[](~/samples-cosmosdb-xamarin/src/ToDoItems.Core/Services/CosmosDBService.cs?name=InsertToDoItem)]
+
+ The item URI is specified as well as the item to be inserted.
+
+* The `CosmosDBService.UpdateToDoItem` function demonstrates how to replace an existing document with a new one:
+
+ [!code-csharp[](~/samples-cosmosdb-xamarin/src/ToDoItems.Core/Services/CosmosDBService.cs?name=UpdateToDoItem)]
+
+ Here a new URI is needed to uniquely identify the document to replace and is obtained by using `UriFactory.CreateDocumentUri` and passing it the database and container names and the ID of the document.
+
+ The `DocumentClient.ReplaceDocumentAsync` replaces the document identified by the URI with the one specified as a parameter.
+
+* Deleting an item is demonstrated with the `CosmosDBService.DeleteToDoItem` function:
+
+ [!code-csharp[](~/samples-cosmosdb-xamarin/src/ToDoItems.Core/Services/CosmosDBService.cs?name=DeleteToDoItem)]
+
+ Again note the unique document URI being created and passed to the `DocumentClient.DeleteDocumentAsync` function.
+
+## Run the app
+
+You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+
+The following steps will demonstrate how to run the app using the Visual Studio for Mac debugger.
+
+> [!NOTE]
+> Usage of the Android version of the app is exactly the same; any differences will be called out in the steps below. If you wish to debug with Visual Studio on Windows, documentation on how to do so can be found for [iOS here](/xamarin/ios/deploy-test/debugging-in-xamarin-ios?tabs=vswin) and [Android here](/xamarin/android/deploy-test/debugging/).
+
+1. First select the platform you wish to target by clicking on the dropdown highlighted and selecting either ToDoItems.iOS for iOS or ToDoItems.Android for Android.
+
+ :::image type="content" source="./media/create-sql-api-xamarin-dotnet/ide-select-platform.png" alt-text="Selecting a platform to debug in Visual Studio for Mac":::
+
+2. To start debugging the app, either press cmd+Enter or click the play button.
+
+ :::image type="content" source="./media/create-sql-api-xamarin-dotnet/ide-start-debug.png" alt-text="Starting to debug in Visual Studio for Mac":::
+
+3. When the iOS simulator or Android emulator finishes launching, the app will display two tabs at the bottom of the screen for iOS and the top of the screen for Android. The first tab shows todo items that are not completed; the second shows todo items that are completed.
+
+ :::image type="content" source="./media/create-sql-api-xamarin-dotnet/ios-droid-started.png" alt-text="Launch screen of ToDo app":::
+
+4. To complete a todo item on iOS, slide it to the left > tap the **Complete** button. To complete a todo item on Android, long press the item > tap the **Complete** button.
+
+ :::image type="content" source="./media/create-sql-api-xamarin-dotnet/simulator-complete.png" alt-text="Complete a todo item":::
+
+5. To edit a todo item > tap on the