Updates from: 03/04/2022 02:09:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 06/18/2021 Last updated : 03/03/2022
The authorization code flow for single page applications requires some additiona
The `spa` redirect type is backwards compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow.

## 1. Get an authorization code
-The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This is the interactive part of the flow, where the user takes action. In this request, the client indicates in the `scope` parameter the permissions that it needs to acquire from the user. The following three examples (with line breaks for readability) each use a different user flow. If you're testing this GET HTTP request, use your browser.
+The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This is the interactive part of the flow, where the user takes action. In this request, the client indicates in the `scope` parameter the permissions that it needs to acquire from the user. The following examples (with line breaks for readability) show how to acquire an authorization code. If you're testing this GET HTTP request, use your browser.
```http
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. |
| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
| prompt |Optional |The type of user interaction that is required. Currently, the only valid value is `login`, which forces the user to enter their credentials on that request. Single sign-on will not take effect. |
-| code_challenge | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - native apps, SPAs, and confidential clients like web apps. |
-| `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If excluded, `code_challenge` is assumed to be plaintext if `code_challenge` is included. Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](tutorial-register-spa.md).|
+| code_challenge | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. You need to add logic in your application to generate the `code_verifier` and `code_challenge`. The `code_challenge` is a Base64 URL-encoded SHA256 hash of the `code_verifier`. You store the `code_verifier` in your application for later use, and send the `code_challenge` along with the authorization request. A sketch of generating these values follows this table. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - native apps, SPAs, and confidential clients like web apps. |
+| `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If you exclude the `code_challenge_method`, but still include the `code_challenge`, then the `code_challenge` is assumed to be plaintext. Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](tutorial-register-spa.md).|
| login_hint | No| Can be used to pre-fill the sign-in name field of the sign-in page. For more information, see [Prepopulate the sign-in name](direct-signin.md#prepopulate-the-sign-in-name). |
| domain_hint | No| Provides a hint to Azure AD B2C about the social identity provider that should be used for sign-in. If a valid value is included, the user goes directly to the identity provider sign-in page. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider). |
| Custom parameters | No| Custom parameters that can be used with [custom policies](custom-policy-overview.md). For example, [dynamic custom page content URI](customize-ui-with-html.md?pivots=b2c-custom-policy#configure-dynamic-custom-page-content-uri), or [key-value claim resolvers](claim-resolver-overview.md#oauth2-key-value-parameters). |
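The `code_challenge` row above notes that you generate the `code_verifier` yourself and derive the `code_challenge` from it. The following is a minimal sketch of that derivation in Node.js/TypeScript; the helper name and the use of Node's built-in `crypto` module are illustrative assumptions, not part of this article.

```typescript
import { createHash, randomBytes } from "crypto";

// Base64url-encode without padding, as required by RFC 7636.
function base64UrlEncode(buffer: Buffer): string {
  return buffer
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// 1. Generate a high-entropy code_verifier and store it for the later token request.
const codeVerifier = base64UrlEncode(randomBytes(32));

// 2. Derive the code_challenge as the base64url-encoded SHA-256 hash of the verifier.
const codeChallenge = base64UrlEncode(
  createHash("sha256").update(codeVerifier).digest()
);

// Send code_challenge and code_challenge_method=S256 on the /authorize request;
// send the stored codeVerifier later on the /token request.
console.log({ codeVerifier, codeChallenge });
```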
grant_type=authorization_code&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6&sco
| client_secret | Yes, in Web Apps | The application secret that was generated in the [Azure portal](https://portal.azure.com/). Client secrets are used in this flow for Web App scenarios, where the client can securely store a client secret. For Native App (public client) scenarios, client secrets cannot be securely stored, and therefore are not used in this call. If you use a client secret, please change it on a periodic basis. |
| grant_type |Required |The type of grant. For the authorization code flow, the grant type must be `authorization_code`. |
| scope |Required |A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You also can use the `openid` scope to request an ID token from Azure AD B2C. |
-| code |Required |The authorization code that you acquired in the first leg of the flow. |
+| code |Required |The authorization code that you acquired from the `/authorize` endpoint. |
| redirect_uri |Required |The redirect URI of the application where you received the authorization code. |
-| code_verifier | recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
+| code_verifier | recommended | The same `code_verifier` used to obtain the authorization code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/).
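As a sketch of the POST described above, the following redeems the authorization code with `fetch` (Node.js 18+ or any fetch-capable runtime). The tenant name, policy name, redirect URI, and the placeholder code and verifier values are assumptions rather than values from this article.

```typescript
// Placeholders: substitute your own tenant, policy, client, and values captured earlier.
const tokenEndpoint =
  "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/B2C_1_signupsignin1/oauth2/v2.0/token";
const authorizationCode = "<code-returned-from-the-authorize-request>";
const codeVerifier = "<code_verifier-generated-for-the-authorize-request>";

const response = await fetch(tokenEndpoint, {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "authorization_code",
    client_id: "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
    scope: "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access openid",
    code: authorizationCode,
    redirect_uri: "https://localhost:5000/redirect", // must match the registered redirect URI
    code_verifier: codeVerifier,                     // required when PKCE was used
  }),
});

// The JSON response contains access_token, id_token, refresh_token, and expiry metadata.
const tokens = await response.json();
```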
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
Previously updated : 12/09/2021 Last updated : 03/03/2022
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
Previously updated : 12/09/2021 Last updated : 03/03/2022
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
You should now see BindID as a new OIDC Identity provider listed within your B2C
11. Select **Run user flow**.
-12. The browser will be redirected to the BindID login page. Enter the account name registered during User registration. The user will receive a push notification to the registered user mobile device for a Fast Identity Online (FIDO2) certified authentication. It can be a user finger print, biometric or decentralized pin.
+12. The browser is redirected to the BindID login page. The user enters the account email registered during user registration and authenticates using appless FIDO2 biometrics, such as a fingerprint.
13. Once the authentication challenge is accepted, the browser redirects the user to the reply URL.
active-directory-b2c Tokens Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tokens-overview.md
Previously updated : 02/11/2022 Last updated : 03/03/2022
Azure AD B2C supports the [OAuth 2.0 and OpenID Connect protocols](protocols-ove
The following tokens are used in communication with Azure AD B2C: -- **ID token** - A JWT that contains claims that you can use to identify users in your application. This token is securely sent in HTTP requests for communication between two components of the same application or service. You can use the claims in an ID token as you see fit. They are commonly used to display account information or to make access control decisions in an application. ID tokens are signed, but they are not encrypted. When your application or API receives an ID token, it must validate the signature to prove that the token is authentic. Your application or API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.-- **Access token** - A JWT that contains claims that you can use to identify the granted permissions to your APIs. Access tokens are signed, but they aren't encrypted. Access tokens are used to provide access to APIs and resource servers. When your API receives an access token, it must validate the signature to prove that the token is authentic. Your API must also validate a few claims in the token to prove that it is valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.-- **Refresh token** - Refresh tokens are used to acquire new ID tokens and access tokens in an OAuth 2.0 flow. They provide your application with long-term access to resources on behalf of users without requiring interaction with those users. Refresh tokens are opaque to your application. They are issued by Azure AD B2C and can be inspected and interpreted only by Azure AD B2C. They are long-lived, but your application shouldn't be written with the expectation that a refresh token will last for a specific period of time. Refresh tokens can be invalidated at any moment for a variety of reasons. The only way for your application to know if a refresh token is valid is to attempt to redeem it by making a token request to Azure AD B2C. When you redeem a refresh token for a new token, you receive a new refresh token in the token response. Save the new refresh token. It replaces the refresh token that you previously used in the request. This action helps guarantee that your refresh tokens remain valid for as long as possible. Note that single-page applications using the authorization code flow with PKCE always have a refresh token lifetime of 24 hours. [Learn more about the security implications of refresh tokens in the browser](../active-directory/develop/reference-third-party-cookies-spas.md#security-implications-of-refresh-tokens-in-the-browser).
+- **ID token** - A JWT that contains claims that you can use to identify users in your application. This token is securely sent in HTTP requests for communication between two components of the same application or service. You can use the claims in an ID token as you see fit. They're commonly used to display account information or to make access control decisions in an application. ID tokens are signed, but they're not encrypted. When your application or API receives an ID token, it must validate the signature to prove that the token is authentic. Your application or API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
+
+- **Access token** - A JWT that contains claims that you can use to identify the granted permissions to your APIs. Access tokens are signed, but they aren't encrypted. Access tokens are used to provide access to APIs and resource servers. When your API receives an access token, it must validate the signature to prove that the token is authentic. Your API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
+
+- **Refresh token** - Refresh tokens are used to acquire new ID tokens and access tokens in an OAuth 2.0 flow. They provide your application with long-term access to resources on behalf of users without requiring interaction with those users. Refresh tokens are opaque to your application. They're issued by Azure AD B2C and can be inspected and interpreted only by Azure AD B2C. They're long-lived, but your application shouldn't be written with the expectation that a refresh token will last for a specific period of time. Refresh tokens can be invalidated at any moment for a variety of reasons. The only way for your application to know if a refresh token is valid is to attempt to redeem it by making a token request to Azure AD B2C. When you redeem a refresh token for a new token, you receive a new refresh token in the token response. Save the new refresh token. It replaces the refresh token that you previously used in the request. This action helps guarantee that your refresh tokens remain valid for as long as possible. Single-page applications using the authorization code flow with PKCE always have a refresh token lifetime of 24 hours. [Learn more about the security implications of refresh tokens in the browser](../active-directory/develop/reference-third-party-cookies-spas.md#security-implications-of-refresh-tokens-in-the-browser).
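The last item in the list above says to redeem the refresh token and save the new one returned in the response. The following is a minimal sketch of that exchange; the tenant, policy, client ID, and scope values are placeholders rather than values from this article.

```typescript
// Sketch: redeem a refresh token and keep the new one from the response.
async function redeemRefreshToken(currentRefreshToken: string) {
  const tokenEndpoint =
    "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/B2C_1_signupsignin1/oauth2/v2.0/token";

  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      client_id: "<application-client-id>",
      scope: "<application-client-id> offline_access openid",
      refresh_token: currentRefreshToken,
    }),
  });

  if (!response.ok) {
    // The refresh token may have been invalidated; fall back to an interactive sign-in.
    throw new Error(`Token refresh failed with status ${response.status}`);
  }

  const result = await response.json();
  // Replace the stored refresh token with the new one so future refreshes keep working.
  return { accessToken: result.access_token, refreshToken: result.refresh_token };
}
```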
## Endpoints
The metadata document for the `B2C_1_signupsignin1` policy in the `contoso.onmic
https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signupsignin1/v2.0/.well-known/openid-configuration
```
-To determine which policy was used to sign a token (and where to go to request the metadata), you have two options. First, the policy name is included in the `tfp` (default) or `acr` claim (as configured) in the token. You can parse claims out of the body of the JWT by base-64 decoding the body and deserializing the JSON string that results. The `tfp` or `acr` claim is the name of the policy that was used to issue the token. The other option is to encode the policy in the value of the `state` parameter when you issue the request, and then decode it to determine which policy was used. Either method is valid.
+To determine which policy was used to sign a token (and where to go to request the metadata), you have two options. First, the policy name is included in the `tfp` (default) or `acr` claim (as configured) in the token. You can parse claims out of the body of the JWT by base-64 decoding the body and deserializing the JSON string that results. The `tfp` or `acr` claim is the name of the policy that was used to issue the token. The other option is to encode the policy in the value of the `state` parameter when you issue the request, and then decode it to determine which policy was used. Either method is valid.
Azure AD B2C uses the RS256 algorithm, which is based on the [RFC 3447](https://www.rfc-editor.org/rfc/rfc3447#section-3.1) specification. The public key consists of two components: the RSA modulus (`n`) and the RSA public exponent (`e`). You can programmatically convert `n` and `e` values to a certificate format for token validation.
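As an illustrative sketch of the validation and claim parsing described above, the example below uses the third-party `jose` library to verify the RS256 signature against the policy's published keys and then reads the `tfp`/`acr` claim. The JWKS URI, issuer, and audience values are placeholder assumptions; in practice, read `jwks_uri` and `issuer` from the policy's metadata document shown earlier.

```typescript
import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholder JWKS URI; use the jwks_uri value from the policy metadata document.
const jwks = createRemoteJWKSet(
  new URL(
    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signupsignin1/discovery/v2.0/keys"
  )
);

async function validateB2cToken(token: string) {
  // jwtVerify checks the RS256 signature and the standard time-based claims.
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "<issuer-value-from-the-metadata-document>", // placeholder
    audience: "<application-client-id>",                 // placeholder
  });

  // The tfp (default) or acr claim names the policy that issued the token.
  console.log("Issued by policy:", payload.tfp ?? payload.acr);
  return payload;
}
```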
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 11/02/2021 Last updated : 03/03/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## February 2022
+
+### New articles
+
+- [Configure authentication in a sample Node.js web application by using Azure Active Directory B2C](configure-a-sample-node-web-app.md)
+- [Configure authentication in a sample Node.js web API by using Azure Active Directory B2C](configure-authentication-in-sample-node-web-app-with-api.md)
+- [Enable authentication options in a Node.js web app by using Azure Active Directory B2C](enable-authentication-in-node-web-app-options.md)
+- [Enable Node.js web API authentication options using Azure Active Directory B2C](enable-authentication-in-node-web-app-with-api-options.md)
+- [Enable authentication in your own Node.js web API by using Azure Active Directory B2C](enable-authentication-in-node-web-app-with-api.md)
+- [Enable authentication in your own Node web application using Azure Active Directory B2C](enable-authentication-in-node-web-app.md)
+
+### Updated articles
+
+- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)
+- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)
+- [Define a self-asserted technical profile in an Azure Active Directory B2C custom policy](self-asserted-technical-profile.md)
+- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md)
+- [Date claims transformations](date-transformations.md)
+- [Integer claims transformations](integer-transformations.md)
+- [JSON claims transformations](json-transformations.md)
+- [Define phone number claims transformations in Azure AD B2C](phone-number-claims-transformations.md)
+- [Social accounts claims transformations](social-transformations.md)
+- [String claims transformations](string-transformations.md)
+- [Web sign in with OpenID Connect in Azure Active Directory B2C](openid-connect.md)
+
## January 2022

### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md) - [Define an Azure AD MFA technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md)-- [String claims transformations](string-transformations.md)-
-## November 2021
-
-### Updated articles
--- [Define an OAuth2 technical profile in an Azure Active Directory B2C custom policy](oauth2-technical-profile.md)-- [Error codes: Azure Active Directory B2C](error-codes.md)-- [Configure authentication options in an Android app by using Azure AD B2C](enable-authentication-android-app-options.md)-- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md)--
-## October 2021
-
-### New articles
--- [Tutorial: Configure IDEMIA with Azure Active Directory B2C for relying party to consume IDEMIA or US State issued mobile identity credentials (Preview)](partner-idemia.md)-- [Tutorial: Extend Azure Active Directory B2C to protect on-premises applications using F5 BIG-IP](partner-f5.md)-- [Roles and resource access control](roles-resource-access-control.md)-- [Supported Azure AD features](supported-azure-ad-features.md)-
-### Updated articles
--- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md)-- [Customize the user interface in Azure Active Directory B2C](customize-ui.md)-- [Tutorial: Extend Azure Active Directory B2C to protect on-premises applications using F5 BIG-IP](partner-f5.md)-- [Set up sign-up and sign-in with generic OpenID Connect using Azure Active Directory B2C](identity-provider-generic-openid-connect.md)-- [RelyingParty](relyingparty.md)-- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)-- [Collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md)-- [Troubleshoot Azure AD B2C custom policies and user flows](troubleshoot.md)-- [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md)-- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Azure Sentinel](azure-sentinel.md)-- [What is Azure Active Directory B2C?](overview.md)-- [Quickstart: Set up sign in for a single-page app using Azure Active Directory B2C](quickstart-single-page-app.md)-- [Quickstart: Set up sign in for an ASP.NET application using Azure Active Directory B2C](quickstart-web-app-dotnet.md)-- [Solutions and Training for Azure Active Directory B2C](solution-articles.md)-- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)-- [Register a SAML application in Azure AD B2C](saml-service-provider.md)-- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)----
-## September 2021
-
-### Updated articles
--- [Page layout versions](page-layout.md)-- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)-- [Add an API connector to a sign-up user flow](add-api-connector.md)-- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md)-- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)-- [Manage your Azure Active Directory B2C tenant](tenant-management.md)-- [Clean up resources and delete the tenant](tutorial-delete-tenant.md)-- [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md)-- [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)-- [Configure itsme OpenID Connect (OIDC) with Azure Active Directory B2C](partner-itsme.md)-- [Tutorial: Configure Keyless with Azure Active Directory B2C](partner-keyless.md)-- [Tutorial: Configure Nok Nok with Azure Active Directory B2C to enable passwordless FIDO2 authentication](partner-nok-nok.md)-- [Tutorial for configuring Saviynt with Azure Active Directory B2C](partner-saviynt.md)-- [Integrating Trusona with Azure Active Directory B2C](partner-trusona.md)-- [Integrating Twilio Verify App with Azure Active Directory B2C](partner-twilio.md)-- [Configure complexity requirements for passwords in Azure Active Directory B2C](password-complexity.md)-- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)-- [Set up sign-up and sign-in with a Google account using Azure Active Directory B2C](identity-provider-google.md)-- [Set up sign-up and sign-in with a ID.me account using Azure Active Directory B2C](identity-provider-id-me.md)-- [Set up sign-up and sign-in with a LinkedIn account using Azure Active Directory B2C](identity-provider-linkedin.md)-- [Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C](identity-provider-qq.md)-- [Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C](identity-provider-salesforce-saml.md)-- [Set up sign-up and sign-in with a Salesforce account using Azure Active Directory B2C](identity-provider-salesforce.md)-- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)-- [Set up sign-up and sign-in with a WeChat account using Azure Active Directory B2C](identity-provider-wechat.md)-- [Set up sign-up and sign-in with a Weibo account using Azure Active Directory B2C](identity-provider-weibo.md)-- [Pass an identity provider access token to your application in Azure Active Directory B2C](idp-pass-through-user-flow.md)-- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)-- [Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C](identity-provider-amazon.md)-- [Set up sign-up and sign-in with a Facebook account using Azure Active Directory B2C](identity-provider-facebook.md)-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)-- [Billing model for Azure Active Directory B2C](billing.md)-- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)--
-## August 2021
-
-### New articles
--- [Deploy custom policies with GitHub Actions](deploy-custom-policies-github-action.md)-- [Configure authentication in a sample WPF desktop app by using Azure AD B2C](configure-authentication-sample-wpf-desktop-app.md)-- [Enable authentication options in a WPF desktop app by using Azure AD B2C](enable-authentication-wpf-desktop-app-options.md)-- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)-- [Configure authentication in a sample Python web application using Azure Active Directory B2C](configure-authentication-sample-python-web-app.md)-- [Configure authentication options in a Python web application using Azure Active Directory B2C](enable-authentication-python-web-app-options.md)-- [Tutorial: How to perform security analytics for Azure AD B2C data with Azure Sentinel](azure-sentinel.md)-- [Enrich tokens with claims from external sources using API connectors](add-api-connector-token-enrichment.md)-
-### Updated articles
--- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)-- [Configure authentication in a sample WPF desktop app by using Azure AD B2C](configure-authentication-sample-wpf-desktop-app.md)-- [Enable authentication options in a WPF desktop app by using Azure AD B2C](enable-authentication-wpf-desktop-app-options.md)-- [Configure authentication in a sample iOS Swift app by using Azure AD B2C](configure-authentication-sample-ios-app.md)-- [Enable authentication options in an iOS Swift app by using Azure AD B2C](enable-authentication-ios-app-options.md)-- [Enable authentication in your own iOS Swift app by using Azure AD B2C](enable-authentication-ios-app.md)-- [Add a web API application to your Azure Active Directory B2C tenant](add-web-api-application.md)-- [Configure authentication in a sample Android app by using Azure AD B2C](configure-authentication-sample-android-app.md)-- [Configure authentication options in an Android app by using Azure AD B2C](enable-authentication-android-app-options.md)-- [Enable authentication in your own Android app by using Azure AD B2C](enable-authentication-android-app.md)-- [Configure authentication in a sample web app by using Azure AD B2C](configure-authentication-sample-web-app.md)-- [Enable authentication options in a web app by using Azure AD B2C](enable-authentication-web-application-options.md)-- [Enable authentication in your own web app by using Azure AD B2C](enable-authentication-web-application.md)-- [Configure authentication options in a single-page application by using Azure AD B2C](enable-authentication-spa-app-options.md)-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)-- [Add AD FS as an OpenID Connect identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs.md)-- [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md)-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md)-- [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)-- [Add an API connector to a sign-up user flow](add-api-connector.md)-- [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md)-- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)--
-## July 2021
-
-### New articles
--- [Configure authentication in a sample Angular Single Page application using Azure Active Directory B2C](configure-authentication-sample-angular-spa-app.md)-- [Configure authentication in a sample iOS Swift application using Azure Active Directory B2C](configure-authentication-sample-ios-app.md)-- [Configure authentication options in an Angular application using Azure Active Directory B2C](enable-authentication-angular-spa-app-options.md)-- [Enable authentication in your own Angular Application using Azure Active Directory B2C](enable-authentication-angular-spa-app.md)-- [Configure authentication options in an iOS Swift application using Azure Active Directory B2C](enable-authentication-ios-app-options.md)-- [Enable authentication in your own iOS Swift application using Azure Active Directory B2C](enable-authentication-ios-app.md)-
-### Updated articles
--- [Customize the user interface in Azure Active Directory B2C](customize-ui.md)-- [Integer claims transformations](integer-transformations.md)-- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md)-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)-- [Page layout versions](page-layout.md)-- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)--
-## June 2021
-
-### New articles
--- [Enable authentication in your own web API using Azure Active Directory B2C](enable-authentication-web-api.md)-- [Enable authentication in your own Single Page Application using Azure Active Directory B2C](enable-authentication-spa-app.md)-- [Publish your Azure AD B2C app to the Azure AD app gallery](publish-app-to-azure-ad-app-gallery.md)-- [Configure authentication in a sample Single Page application using Azure Active Directory B2C](configure-authentication-sample-spa-app.md)-- [Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C](configure-authentication-sample-web-app-with-api.md)-- [Configure authentication in a sample Single Page application using Azure Active Directory B2C options](enable-authentication-spa-app-options.md)-- [Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C options](enable-authentication-web-app-with-api-options.md)-- [Enable authentication in your own web application that calls a web API using Azure Active Directory B2C](enable-authentication-web-app-with-api.md)-- [Sign-in options in Azure AD B2C](sign-in-options.md)-
-### Updated articles
--- [User profile attributes](user-profile-attributes.md)-- [Configure authentication in a sample web application using Azure Active Directory B2C](configure-authentication-sample-web-app.md)-- [Configure authentication in a sample web application using Azure Active Directory B2C options](enable-authentication-web-application-options.md)-- [Set up a sign-in flow in Azure Active Directory B2C](add-sign-in-policy.md)-- [Set up a sign-up and sign-in flow in Azure Active Directory B2C](add-sign-up-and-sign-in-policy.md)-- [Set up the local account identity provider](identity-provider-local.md)-- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)-- [Add user attributes and customize user input in Azure Active Directory B2C](configure-user-input.md)-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)--
-## May 2021
-
-### New articles
--- [Define an OAuth2 custom error technical profile in an Azure Active Directory B2C custom policy](oauth2-error-technical-profile.md)-- [Configure authentication in a sample web application using Azure Active Directory B2C](configure-authentication-sample-web-app.md)-- [Configure authentication in a sample web application using Azure Active Directory B2C options](enable-authentication-web-application-options.md)-- [Enable authentication in your own web application using Azure Active Directory B2C](enable-authentication-web-application.md)-- [Azure Active Directory B2C TLS and cipher suite requirements](https-cipher-tls-requirements.md)-
-### Updated articles
--- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)-- [Mitigate credential attacks in Azure AD B2C](threat-management.md)-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)--
-## April 2021
-
-### New articles
--- [Set up sign-up and sign-in with a eBay account using Azure Active Directory B2C](identity-provider-ebay.md)-- [Clean up resources and delete the tenant](tutorial-delete-tenant.md)-- [Define a Conditional Access technical profile in an Azure Active Directory B2C custom policy](conditional-access-technical-profile.md)-- [Manage your Azure Active Directory B2C tenant](tenant-management.md)-
-### Updated articles
--- [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md)-- [Add an API connector to a sign-up user flow](add-api-connector.md)-- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](add-api-connector-token-enrichment.md)-- [Secure your API Connector](secure-rest-api.md)-- [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md)-- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)-- [Overview of policy keys in Azure Active Directory B2C](policy-keys-overview.md)-- [Custom email verification with Mailjet](custom-email-mailjet.md)-- [Custom email verification with SendGrid](custom-email-sendgrid.md)-- [Tutorial: Create user flows in Azure Active Directory B2C](tutorial-create-user-flows.md)-- [Azure AD B2C custom policy overview](custom-policy-overview.md)-- [User flows and custom policies overview](user-flow-overview.md)-- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)-- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md)-- [User flow versions in Azure Active Directory B2C](user-flow-versions.md)--
-## March 2021
-
-### New articles
--- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)-- [Investigate risk with Identity Protection in Azure AD B2C](identity-protection-investigate-risk.md)-- [Set up sign-up and sign-in with an Apple ID using Azure Active Directory B2C (Preview)](identity-provider-apple-id.md)-- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md)-- [Embedded sign-in experience](embedded-login.md)-
-### Updated articles
--- [Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C](identity-provider-amazon.md)-- [Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C](identity-provider-salesforce-saml.md)-- [Migrate an OWIN-based web API to b2clogin.com or a custom domain](multiple-token-endpoints.md)-- [Technical profiles](technicalprofiles.md)-- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)-- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)-- [RelyingParty](relyingparty.md)--
-## February 2021
-
-### New articles
--- [Securing phone-based multifactor authentication (MFA)](phone-based-mfa.md)-
-### Updated articles
--- [Azure Active Directory B2C code samples](integrate-with-app-code-samples.md)-- [Track user behavior in Azure AD B2C by using Application Insights](analytics-with-application-insights.md)-- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)
+- [String claims transformations](string-transformations.md)
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md
Last updated 06/14/2021
-+
active-directory Concept Sspr Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-licensing.md
Last updated 07/13/2021
-+
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Last updated 06/25/2021
-+
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
Last updated 01/19/2021
-+
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
Last updated 10/05/2020
-+
active-directory Howto Sspr Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-customization.md
Last updated 07/17/2020
-+
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
Last updated 02/02/2022 -+ -+
active-directory Howto Sspr Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-reporting.md
Last updated 10/25/2021
-+
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Last updated 07/17/2020
-+
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Last updated 02/22/2022
-+
active-directory Troubleshoot Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr.md
Last updated 06/28/2021
-+
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
Last updated 11/11/2021
-+
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr.md
Last updated 1/05/2022 -+ # Customer intent: As an Azure AD Administrator, I want to learn how to enable and use self-service password reset so that my end-users can unlock their accounts or reset their passwords through a web browser.
active-directory Active Directory Authentication Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md
The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio
> [!NOTE]
> Looking for the Azure AD v2.0 libraries (MSAL)? Check out the [MSAL library guide](../develop/reference-v2-libraries.md).
->
->
++
+> [!WARNING]
+> Support for Active Directory Authentication Library (ADAL) will end in December 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](../develop/msal-migration.md).
## Microsoft-supported Client Libraries
active-directory Sample V1 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/sample-v1-code.md
This section provides links to samples you can use to learn more about the Azure
> [!NOTE]
> If you are interested in Azure AD V2 code samples, see [v2.0 code samples by scenario](../develop/sample-v2-code.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).
+> [!WARNING]
+> Support for Active Directory Authentication Library (ADAL) will end in December 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](../develop/msal-migration.md).
+ To understand the basic scenario for each sample type, see [Authentication scenarios for Azure AD](v1-authentication-scenarios.md). You can also contribute to our samples on GitHub. To learn how, see [Microsoft Azure Active Directory samples and documentation](https://github.com/Azure-Samples?page=3&query=active-directory).
active-directory Howto Get List Of All Active Directory Auth Library Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-get-list-of-all-active-directory-auth-library-apps.md
Previously updated : 07/22/2021 Last updated : 03/03/2022
# Get a complete list of apps using ADAL in your tenant
-Support for Active Directory Authentication Library (ADAL) will end on June 30, 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. This article provides guidance on how to use Azure Monitor workbooks to obtain a list of all apps that use ADAL in your tenant.
+Support for Active Directory Authentication Library (ADAL) will end in December 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](msal-migration.md). This article provides guidance on how to use Azure Monitor workbooks to obtain a list of all apps that use ADAL in your tenant.
## Sign-ins workbook
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
Previously updated : 07/22/2021 Last updated : 03/03/2022
If any of your applications use the Azure Active Directory Authentication Library (ADAL) for authentication and authorization functionality, it's time to migrate them to the [Microsoft Authentication Library (MSAL)](msal-overview.md#languages-and-frameworks).

-- All Microsoft support and development for ADAL, including security fixes, ends on June 30, 2022.
+- All Microsoft support and development for ADAL, including security fixes, ends in December 2022.
+- There are no ADAL feature releases or new platform version releases planned prior to December 2022.
- No new features have been added to ADAL since June 30, 2020.

> [!WARNING]
-> If you choose not to migrate to MSAL before ADAL support ends on June 30, 2022, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date, but Microsoft will no longer release security fixes on ADAL.
+> If you choose not to migrate to MSAL before ADAL support ends in December 2022, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date, but Microsoft will no longer release security fixes on ADAL.
## Why switch to MSAL?
MSAL provides multiple benefits over ADAL, including the following features:
|Features|MSAL|ADAL|
|---|---|---|
|**Security**|||
-|Security fixes beyond June 30, 2022|![Security fixes beyond June 30, 2022 - MSAL provides the feature][y]|![Security fixes beyond June 30, 2022 - ADAL doesn't provide the feature][n]|
+|Security fixes beyond December 2022|![Security fixes beyond December 2022 - MSAL provides the feature][y]|![Security fixes beyond December 2022 - ADAL doesn't provide the feature][n]|
| Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support [Continuous Access Evaluation (CAE)](app-resilience-continuous-access-evaluation.md).|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - MSAL provides the feature][y]|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - ADAL doesn't provide the feature][n]|
| Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) |![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - MSAL provides the feature][y]|![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - ADAL doesn't provide the feature][n]|
|**User accounts and experiences**|||
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `response_type` | required | Must include `code` for the authorization code flow. Can also include `id_token` or `token` if using the [hybrid flow](#request-an-id-token-as-well-or-hybrid-flow). | | `redirect_uri` | required | The `redirect_uri` of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except it must be URL-encoded. For native and mobile apps, use one of the recommended values: `https://login.microsoftonline.com/common/oauth2/nativeclient` for apps using embedded browsers or `http://localhost` for apps that use system browsers. | | `scope` | required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. For the `/authorize` leg of the request, this parameter can cover multiple resources. This value allows your app to get consent for multiple web APIs you want to call. |
-| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. It can be one of the following values:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. |
+| `response_mode` | recommended | Specifies how the identity platform should return the requested token to your app. <br/><br/>Supported values:<br/><br/>- `query`: Default when requesting an access token. Provides the code as a query string parameter on your redirect URI. `query` isn't supported when requesting an ID token by using the implicit flow.<br/>- `fragment`: Default when requesting an ID token by using the implicit flow. Also supported if requesting *only* a code.<br/>- `form_post`: Executes a POST containing the code to your redirect URI. Supported when requesting a code. |
| `state` | recommended | A value included in the request that is also returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred. For instance, it could encode the page or view they were on. | | `prompt` | optional | Indicates the type of user interaction that is required. Valid values are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` forces the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite. It ensures that the user isn't presented with any interactive prompt. If the request can't be completed silently by using single-sign on, the Microsoft identity platform returns an `interaction_required` error.<br/>- `prompt=consent` triggers the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` interrupts single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> | | `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user. Apps can use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
|`response_type`| required | The addition of `id_token` indicates to the server that the application would like an ID token in the response from the `/authorize` endpoint. | |`scope`| required | For ID tokens, this parameter must be updated to include the ID token scopes: `openid` and optionally `profile` and `email`. | |`nonce`| required| A value included in the request, generated by the app, that is included in the resulting `id_token` as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. |
-|`response_mode`| recommended | Specifies the method that should be used to send the resulting token back to your app. Default value is `query` for just an authorization code, but `fragment` if the request includes an `id_token` `response_type`. We recommend apps use `form_post`, especially when using `http://localhost` as a redirect URI. |
+|`response_mode`| recommended | Specifies the method that should be used to send the resulting token back to your app. Default value is `query` for just an authorization code, but `fragment` if the request includes an `id_token` `response_type` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). We recommend apps use `form_post`, especially when using `http://localhost` as a redirect URI. |
The use of `fragment` as a response mode causes issues for web apps that read the code from the redirect. Browsers don't pass the fragment to the web server. In these situations, apps should use the `form_post` response mode to ensure that all data is sent to the server.
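A minimal sketch of receiving a `form_post` response on a Node.js/Express server follows; the route path, port, and state handling are illustrative assumptions rather than part of this article.

```typescript
import express from "express";

const app = express();
// form_post delivers the code as application/x-www-form-urlencoded fields in a POST body.
app.use(express.urlencoded({ extended: false }));

// Placeholder route: it must match the redirect URI registered for the app.
app.post("/auth/callback", (req, res) => {
  const { code, state, error, error_description } = req.body;

  if (error) {
    res.status(400).send(`Sign-in failed: ${error_description ?? error}`);
    return;
  }

  // Validate state against the value sent on the /authorize request (not shown),
  // then redeem the code at the /token endpoint from server-side code.
  res.send(`Received an authorization code for state ${state}.`);
});

app.listen(3000);
```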
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
To manage a Windows device, you need to be a member of the local administrators group. As part of the Azure Active Directory (Azure AD) join process, Azure AD updates the membership of this group on a device. You can customize the membership update to satisfy your business requirements. A membership update is, for example, helpful if you want to enable your helpdesk staff to do tasks requiring administrator rights on a device.
-This article explains how the local administrators membership update works and how you can customize it during an Azure AD Join. The content of this article doesn't apply to a **hybrid Azure AD joined** devices.
+This article explains how the local administrators membership update works and how you can customize it during an Azure AD Join. The content of this article doesn't apply to **hybrid Azure AD joined** devices.
## How it works
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
If your organization subscribes to the Azure Monitor service, you can use the [C
If your organization exports sign-in logs to a Security Information and Event Management (SIEM) system, you can retrieve required information from your SIEM system.
+## Identify changes to cross-tenant access settings
+
+The Azure AD audit logs capture all activity around changes to cross-tenant access settings. To audit these changes, filter the audit log on the **CrossTenantAccessSettings** category.
+
+![Audit logs for cross-tenant access settings](media/cross-tenant-access-overview/cross-tenant-access-settings-audit-logs.png)
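If you retrieve audit events programmatically rather than in the portal, a Microsoft Graph query filtered on the same category might look like the sketch below. The permission, filter expression, and property names are assumptions based on the `directoryAudits` API rather than values from this article.

```typescript
// Illustrative sketch: list cross-tenant access setting changes from the Azure AD audit log.
// Acquiring the token is out of scope here; the value below is a placeholder.
const accessToken = "<token-with-AuditLog.Read.All>";

const url =
  "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits" +
  "?$filter=category eq 'CrossTenantAccessSettings'";

const response = await fetch(url, {
  headers: { Authorization: `Bearer ${accessToken}` },
});

const { value: events } = await response.json();
for (const event of events) {
  // Each entry describes who changed what and when.
  console.log(event.activityDateTime, event.activityDisplayName);
}
```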
+ ## Next steps
-[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)
+[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
To support temporary passwords in Azure AD for synchronized users, you can enabl
#### Account expiration
-If your organization uses the accountExpires attribute as part of user account management, this attribute is not synchronized to Azure AD. As a result, an expired Active Directory account in an environment configured for password hash synchronization will still be active in Azure AD. We recommend that if the account is expired, a workflow action should trigger a PowerShell script that disables the user's Azure AD account (use the [Set-AzureADUser](/powershell/module/azuread/set-azureaduser) cmdlet). Conversely, when the account is turned on, the Azure AD instance should be turned on.
+If your organization uses the accountExpires attribute as part of user account management, this attribute is not synchronized to Azure AD. As a result, an expired Active Directory account in an environment configured for password hash synchronization will still be active in Azure AD. We recommend using a scheduled PowerShell script that disables users' AD accounts once they expire (use the [Set-ADUser](/powershell/module/activedirectory/set-aduser) cmdlet). Conversely, during the process of removing the expiration from an AD account, the account should be re-enabled.
### Overwrite synchronized passwords
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
This section defines all properties that you would normally use to manually conf
When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
-As our AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or users log in using an alternate suffix.
+As our AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](./f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or users log in using an alternate suffix.
![Screenshot for user attributes and claims](./media/f5-big-ip-kerberos-easy-button/user-attributes-claims.png)
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to SAP ERP
+description: Learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+++++++ Last updated : 3/1/2022++++
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to SAP ERP
+
+In this article, learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+
+Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
+
+To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+This scenario looks at the classic **SAP ERP application using Kerberos authentication** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The SHA solution for this scenario is made up of the following:
+
+**SAP ERP application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the SAP service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-sap-erp/sp-initiated-flow.png)
+
+| Steps| Description|
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP requests Kerberos ticket from KDC |
+| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
+| 7| Application authorizes request and returns payload |
+
+## Prerequisites
+Prior BIG-IP experience isn't necessary, but you will need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP license offers
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP APM standalone license
+
+ * F5 BIG-IP APM add-on license on an existing F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certs while testing
+
+* An existing SAP ERP environment configured for Kerberos authentication
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template.
+
+With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
+
+The Easy Button client must also be registered in Azure AD before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrator rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down
+
+10. From the **Overview** blade, note the **Client ID** and **Tenant ID** (a quick way to validate these values is sketched after these steps)
+
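If you want to sanity-check the client ID, tenant ID, and client secret you noted, the following is a minimal sketch that requests a Microsoft Graph token with the OAuth 2.0 client credentials flow. All values are placeholders for your own registration.

```powershell
# Minimal sketch: confirm the Easy Button registration's credentials are valid by
# requesting a Microsoft Graph token. Replace the placeholders with your own values.
$tenantId     = "<your-tenant-id>"
$clientId     = "<your-client-id>"
$clientSecret = "<your-client-secret>"

$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = "https://graph.microsoft.com/.default"
}

$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body

# A token in the response indicates the registration, secret, and tenant ID are valid.
$response.token_type
```
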
+## Configure Easy Button
+
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+
+1. From a browser, sign in to the **F5 BIG-IP management console**
+
+2. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+
+3. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
+
+4. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+These are general and service account properties. The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client application you registered earlier in your Azure AD tenant. These settings allow the BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
+
+Some of these are global settings, so they can be re-used for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted when registering the Easy Button client in your tenant
+
+4. Confirm the BIG-IP can successfully connect to your tenant and select **Next**
+
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-sap-erp/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured
+
+2. Enter **Entity ID.** This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-sap-erp/service-provider-settings.png)
+
+ The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the token content can't be intercepted and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-easy-button-sap-erp/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
+
+Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **SAP ERP Central Component > Add** to start the Azure configurations.
+
+ ![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-sap-erp/azure-config-add-app.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users will see in the [MyApps portal](https://myapplications.microsoft.com/)
+
+2. Leave the **Sign On URL (optional)** blank to enable IdP initiated sign-on
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-sap-erp/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+5. Enter the certificate's password in **Signing Key Passphrase**
+
+6. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+
+7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
+
+As our example AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](./f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or users sign in with an alternate suffix.
+
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-sap-erp/user-attributes-claims.png)
+
+You can include additional Azure AD attributes, if necessary, but for this scenario SAP ERP only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+ ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+
+#### Conditional Access Policy
+
+CA policies are enforced post Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, lists all CA policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+2. Select the right arrow and move it to the **Selected Policies** list
+
+Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
+
+![ Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+
+>[!NOTE]
+>The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing (see the sketch after these steps)
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+
+ ![ Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
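
As mentioned in the destination address step, the following is a minimal sketch for pointing a test PC at the BIG-IP virtual server without changing DNS. The IP address and hostname are examples only; run it from an elevated PowerShell prompt on the test client.

```powershell
# Minimal sketch: map the published application's FQDN to the BIG-IP virtual server IP
# in the local hosts file for testing. Replace the example IP and hostname.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" `
            -Value "192.168.30.30`tmysaperp.contoso.com"
```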
+
+### Pool Properties
+
+The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool.** Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. For **Pool Servers** select an existing server node or specify an IP and port for the backend node hosting the header-based application
+
+ ![ Screenshot for Application pool](./media/f5-big-ip-easy-button-ldap/application-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. You will need the Kerberos delegation account created earlier to complete this step.
+
+Enable **Kerberos** and **Show Advanced Setting** to enter the following:
+
+* **Username Source:** Specifies the preferred username to cache for SSO. You can provide any session variable as the source of the user ID, but *session.saml.last.identity* tends to work best as it holds the Azure AD claim containing the logged in user ID
+
+* **User Realm Source:** Required if the user domain is different from the BIG-IP's Kerberos realm. In that case, the APM session variable would contain the logged-in user domain. For example, *session.saml.last.attr.name.domain*
+
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-kerberos-easy-button/sso-headers.png)
+
+* **KDC:** IP of a domain controller (or FQDN if DNS is configured and efficient)
+
+* **UPN Support:** Enable for the APM to use the UPN for Kerberos ticketing
+
+* **SPN Pattern:** Use HTTP/%h to inform the APM to use the host header of the client request and build the SPN that it is requesting a Kerberos token for.
+
+* **Send Authorization:** Disable for applications that prefer negotiating authentication instead of receiving the Kerberos token in the first request. For example, *Tomcat.*
+
+ ![Screenshot for SSO method configuration](./media/f5-big-ip-kerberos-easy-button/sso-method-config.png)
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult the [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users log off.
+When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Microsoft [MyApps portal](https://support.microsoft.com/en-us/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) also terminate the session between the BIG-IP and a client.
+
+During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM the SAML logout endpoint for Azure AD. This helps SP initiated sign-outs terminate the session between a client and Azure AD.
+
+## Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of Enterprise applications.
+
+## Next steps
+
+From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for Kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
+
+Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configuration is automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+ ![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+>[!NOTE]
+>Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI; therefore, we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+You can fail to access the SHA-protected application due to any number of factors, including a misconfiguration.
+
+* Kerberos is time sensitive, so it requires that servers and clients are set to the correct time and, where possible, synchronized to a reliable time source
+
+* Ensure the hostnames for the domain controller and web application are resolvable in DNS
+
+* Ensure there are no duplicate SPNs in your AD environment by executing the following query at the command line on a domain PC: `setspn -q HTTP/my_target_SPN`. A combined sketch of these checks follows this list.
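
The following is a minimal sketch that combines these checks on a domain-joined machine. The hostnames and SPN are examples only; substitute your own domain controller, application FQDN, and SPN.

```powershell
# Minimal sketch of the basic Kerberos troubleshooting checks. Example values only.

# Confirm the local clock is synchronized with a reliable time source.
w32tm /query /status

# Confirm the domain controller and published application resolve in DNS.
Resolve-DnsName dc01.contoso.com
Resolve-DnsName mysaperp.contoso.com

# Query for the target SPN, then search the forest for duplicate SPNs.
setspn -Q HTTP/mysaperp.contoso.com
setspn -X
```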
+
+You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+
+### Log analysis
+
+BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application, then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list, and then select **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data.
+
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD.
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. Navigate to **Access Policy > Overview > Active Sessions**
+
+2. Select the link for your active session. The **View Variables** link in this location may also help determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory Allocadia Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/allocadia-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Allocadia | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Allocadia'
description: Learn how to configure single sign-on between Azure Active Directory and Allocadia.
Previously updated : 12/17/2019 Last updated : 02/25/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Allocadia
+# Tutorial: Azure AD SSO integration with Allocadia
In this tutorial, you'll learn how to integrate Allocadia with Azure Active Directory (Azure AD). When you integrate Allocadia with Azure AD, you can:
In this tutorial, you'll learn how to integrate Allocadia with Azure Active Dire
* Enable your users to be automatically signed-in to Allocadia with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Allocadia supports **IDP** initiated SSO
-* Allocadia supports **Just In Time** user provisioning
+* Allocadia supports **IDP** initiated SSO.
+* Allocadia supports **Just In Time** user provisioning.
-## Adding Allocadia from the gallery
+## Add Allocadia from the gallery
To configure the integration of Allocadia into Azure AD, you need to add Allocadia from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Allocadia** in the search box. 1. Select **Allocadia** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Allocadia
+## Configure and test Azure AD SSO for Allocadia
Configure and test Azure AD SSO with Allocadia using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Allocadia.
-To configure and test Azure AD SSO with Allocadia, complete the following building blocks:
+To configure and test Azure AD SSO with Allocadia, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Allocadia SSO](#configure-allocadia-sso)** - to configure the single sign-on settings on application side.
- * **[Create Allocadia test user](#create-allocadia-test-user)** - to have a counterpart of B.Simon in Allocadia that is linked to the Azure AD representation of user.
+ 1. **[Create Allocadia test user](#create-allocadia-test-user)** - to have a counterpart of B.Simon in Allocadia that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Allocadia** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Allocadia** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
-
- For test environment - `https://na2standby.allocadia.com`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- For production environment - `https://na2.allocadia.com`
+ a. In the **Identifier** text box, type one of the following URLs:
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ | **Identifier** |
+ |- |
+ | For test environment - `https://na2standby.allocadia.com` |
+ | For production environment - `https://na2.allocadia.com` |
- For test environment - `https://na2standby.allocadia.com/allocadia/saml/SSO`
+ b. In the **Reply URL** text box, type one of the following URLs:
- For production environment - `https://na2.allocadia.com/allocadia/saml/SSO`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Allocadia Client support team](mailto:support@allocadia.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ | **Reply URL** |
+ |--|
+ | For test environment - `https://na2standby.allocadia.com/allocadia/saml/SSO` |
+ | For production environment - `https://na2.allocadia.com/allocadia/saml/SSO` |
1. Allocadia application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
Follow these steps to enable Azure AD SSO in the Azure portal.
| firstname | user.givenname | | lastname | user.surname | | email | user.mail |
- | | |
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Allocadia**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called B.Simon is created in Allocadia. Allocadia suppor
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Allocadia tile in the Access Panel, you should be automatically signed in to the Allocadia for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Allocadia for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Allocadia tile in the My Apps, you should be automatically signed in to the Allocadia for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Allocadia with Azure AD](https://aad.portal.azure.com/)
+Once you configure Allocadia, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Bic Cloud Design Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bic-cloud-design-provisioning-tutorial.md
na-+ Last updated 11/15/2021
active-directory Bullseyetdp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/03/2022
active-directory Culture Shift Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/culture-shift-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Culture Shift'
+description: Learn how to configure single sign-on between Azure Active Directory and Culture Shift.
++++++++ Last updated : 02/24/2022++++
+# Tutorial: Azure AD SSO integration with Culture Shift
+
+In this tutorial, you'll learn how to integrate Culture Shift with Azure Active Directory (Azure AD). When you integrate Culture Shift with Azure AD, you can:
+
+* Control in Azure AD who has access to Culture Shift.
+* Enable your users to be automatically signed-in to Culture Shift with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Culture Shift single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Culture Shift supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Culture Shift from the gallery
+
+To configure the integration of Culture Shift into Azure AD, you need to add Culture Shift from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Culture Shift** in the search box.
+1. Select **Culture Shift** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Culture Shift
+
+Configure and test Azure AD SSO with Culture Shift using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Culture Shift.
+
+To configure and test Azure AD SSO with Culture Shift, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Culture Shift SSO](#configure-culture-shift-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Culture Shift test user](#create-culture-shift-test-user)** - to have a counterpart of B.Simon in Culture Shift that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Culture Shift** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type the value:
+ `urn:amazon:cognito:sp:eu-west-2_tWqrsHU3a`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://auth.reportandsupport.co.uk/saml2/idpresponse`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://dashboard.reportandsupport.co.uk/`
+
+1. Culture Shift application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Culture Shift application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | -| |
+ | displayname | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Culture Shift.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Culture Shift**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Culture Shift SSO
+
+To configure single sign-on on the **Culture Shift** side, you need to send the **App Federation Metadata Url** to the [Culture Shift support team](mailto:tickets@culture-shift.co.uk). The support team uses it to configure the SAML SSO connection properly on both sides.
+
+### Create Culture Shift test user
+
+In this section, you create a user called Britta Simon in Culture Shift. Work with [Culture Shift support team](mailto:tickets@culture-shift.co.uk) to add the users in the Culture Shift platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Culture Shift Sign-on URL where you can initiate the login flow.
+
+* Go to Culture Shift Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Culture Shift tile in the My Apps, this will redirect to Culture Shift Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Culture Shift, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Directprint Io Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/directprint-io-provisioning-tutorial.md
na-+ Last updated 09/24/2021
active-directory Facebook Work Accounts Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/facebook-work-accounts-provisioning-tutorial.md
na-+ Last updated 10/27/2021
active-directory Frankli Io Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/frankli-io-provisioning-tutorial.md
ms.assetid: 936223d1-7ba5-4300-b05b-cbf78ee45d0e
-+ Last updated 12/16/2021
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/09/2022
active-directory Klaxoon Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-provisioning-tutorial.md
na-+ Last updated 09/22/2021
active-directory Klaxoon Saml Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-saml-provisioning-tutorial.md
na-+ Last updated 09/22/2021
active-directory Lanschool Air Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lanschool-air-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/03/2022
active-directory Meta Networks Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [A Meta Networks Connector tenant](https://www.metanetworks.com/) * A user account in Meta Networks Connector with Admin permissions.
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Meta Networks Connector](../app-provisioning/customize-application-attributes.md).
++ ## Assigning users to Meta Networks Connector Azure Active Directory uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users and/or groups that have been assigned to an application in Azure AD are synchronized.
Before configuring and enabling automatic user provisioning, you should decide w
* When assigning a user to Meta Networks Connector, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
-## Setup Meta Networks Connector for provisioning
+## Step 2. Configure Meta Networks Connector for provisioning
1. Sign in to your [Meta Networks Connector Admin Console](https://login.metanetworks.com/login/) using your organization name. Navigate to **Administration > API Keys**. ![Meta Networks Connector Admin Console](media/meta-networks-connector-provisioning-tutorial/apikey.png)
-2. Click on the plus sign on the upper right side of the screen to create a new **API Key**.
+1. Click on the plus sign on the upper right side of the screen to create a new **API Key**.
![Meta Networks Connector plus icon](media/meta-networks-connector-provisioning-tutorial/plusicon.png)
-3. Set the **API Key Name** and **API Key Description**.
+1. Set the **API Key Name** and **API Key Description**.
:::image type="content" source="media/meta-networks-connector-provisioning-tutorial/keyname.png" alt-text="Screenshot of the Meta Networks Connector Admin Console with highlighted A P I key name and A P I key description values of Azure A D and A P I key." border="false":::
-4. Turn on **Write** privileges for **Groups** and **Users**.
+1. Turn on **Write** privileges for **Groups** and **Users**.
![Meta Networks Connector privileges](media/meta-networks-connector-provisioning-tutorial/privileges.png)
-5. Click on **Add**. Copy the **SECRET** and save it as this will be the only time you can view it. This value will be entered in the Secret Token field in the Provisioning tab of your Meta Networks Connector application in the Azure portal.
+1. Click on **Add**. Copy the **SECRET** and save it as this will be the only time you can view it. This value will be entered in the Secret Token field in the Provisioning tab of your Meta Networks Connector application in the Azure portal.
:::image type="content" source="media/meta-networks-connector-provisioning-tutorial/token.png" alt-text="Screenshot of a window telling users that the A P I key was added. The Secret box contains an indecipherable value and is highlighted." border="false":::
-6. Add an IdP by navigating to **Administration > Settings > IdP > Create New**.
+1. Add an IdP by navigating to **Administration > Settings > IdP > Create New**.
![Meta Networks Connector Add IdP](media/meta-networks-connector-provisioning-tutorial/newidp.png)
-7. In the **IdP Configuration** page you can **Name** your IdP configuration and choose an **Icon**.
+1. In the **IdP Configuration** page you can **Name** your IdP configuration and choose an **Icon**.
![Meta Networks Connector IdP Name](media/meta-networks-connector-provisioning-tutorial/idpname.png) ![Meta Networks Connector IdP Icon](media/meta-networks-connector-provisioning-tutorial/icon.png)
-8. Under **Configure SCIM** select the API key name created in the previous steps. Click on **Save**.
+1. Under **Configure SCIM** select the API key name created in the previous steps. Click on **Save**.
![Meta Networks Connector configure SCIM](media/meta-networks-connector-provisioning-tutorial/configure.png)
-9. Navigate to **Administration > Settings > IdP tab**. Click on the name of the IdP configuration created in the previous steps to view the **IdP ID**. This **ID** is added to the end of **Tenant URL** while entering the value in **Tenant URL** field in the Provisioning tab of your Meta Networks Connector application in the Azure portal.
+1. Navigate to **Administration > Settings > IdP tab**. Click on the name of the IdP configuration created in the previous steps to view the **IdP ID**. This **ID** is appended to the end of the **Tenant URL** when you enter the value in the **Tenant URL** field in the Provisioning tab of your Meta Networks Connector application in the Azure portal (an optional way to verify the URL and API key is sketched after these steps).
![Meta Networks Connector IdP ID](media/meta-networks-connector-provisioning-tutorial/idpid.png)
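
Before moving on, you can optionally verify the API key and tenant URL outside the Azure portal. The following is a minimal sketch that assumes the Meta Networks SCIM API follows standard SCIM conventions and accepts the API key secret as a bearer token; the IdP ID and secret below are placeholders.

```powershell
# Minimal sketch: list one user from the SCIM endpoint to confirm the URL and secret work.
# Assumes standard SCIM conventions and bearer-token authentication; values are placeholders.
$idpId  = "<your-idp-id>"
$secret = "<your-api-key-secret>"

Invoke-RestMethod -Method Get `
    -Uri "https://api.metanetworks.com/v1/scim/$idpId/Users?count=1" `
    -Headers @{ Authorization = "Bearer $secret" }
```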
-## Add Meta Networks Connector from the gallery
-
-Before configuring Meta Networks Connector for automatic user provisioning with Azure AD, you need to add Meta Networks Connector from the Azure AD application gallery to your list of managed SaaS applications.
-
-**To add Meta Networks Connector from the Azure AD application gallery, perform the following steps:**
+## Step 3. Add Meta Networks Connector from the Azure AD application gallery
-1. In the **[Azure portal](https://portal.azure.com)**, in the left navigation panel, select **Azure Active Directory**.
+Add Meta Networks Connector from the Azure AD application gallery to start managing provisioning to Meta Networks Connector. If you have previously set up Meta Networks Connector for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
- ![The Azure Active Directory button](common/select-azuread.png)
-2. Go to **Enterprise applications**, and then select **All applications**.
+
+## Step 4. Define who will be in scope for provisioning
- ![The Enterprise applications blade](common/enterprise-applications.png)
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. To add a new application, select the **New application** button at the top of the pane.
+* When assigning users and groups to Meta Networks Connector, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
- ![The New application button](common/add-new-app.png)
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-4. In the search box, enter **Meta Networks Connector**, select **Meta Networks Connector** in the results panel, and then click the **Add** button to add the application.
- ![Meta Networks Connector in the results list](common/search-new-app.png)
-## Configuring automatic user provisioning to Meta Networks Connector
+## Step 5. Configure automatic user provisioning to Meta Networks Connector
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Meta Networks Connector based on user and/or group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **Meta Networks Connector**.
+1. In the applications list, select **Meta Networks Connector**.
![The Meta Networks Connector link in the Applications list](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input `https://api.metanetworks.com/v1/scim/<IdP ID>` in **Tenant URL**. Input the **SCIM Authentication Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Meta Networks Connector. If the connection fails, ensure your Meta Networks Connector account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input `https://api.metanetworks.com/v1/scim/<IdP ID>` in **Tenant URL**. Input the **SCIM Authentication Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Meta Networks Connector. If the connection fails, ensure your Meta Networks Connector account has Admin permissions and try again.
![Tenant URL + Token](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
![Notification Email](common/provisioning-notification-email.png)
-7. Click **Save**.
+1. Click **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Meta Networks Connector**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Meta Networks Connector**.
![Meta Networks Connector User Mappings](media/meta-networks-connector-provisioning-tutorial/usermappings.png)
-9. Review the user attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Meta Networks Connector for update operations. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Meta Networks Connector for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Meta Networks Connector API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- ![Meta Networks Connector User Attributes](media/meta-networks-connector-provisioning-tutorial/userattributes.png)
+ |Attribute|Type|Supported for filtering|Required by Meta Networks Connector|
+ |||||
+ |userName|String|&check;|&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |active|Boolean||
+ |phonenumbers[type eq "work"].value|String||
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Meta Networks Connector**.
+ > [!NOTE]
+ > phonenumbers[type eq "work"].value should be in E.164 format. For example, +16175551212
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Meta Networks Connector**.
![Meta Networks Connector Group Mappings](media/meta-networks-connector-provisioning-tutorial/groupmappings.png)
-11. Review the group attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Meta Networks Connector for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Meta Networks Connector for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Meta Networks Connector API supports filtering groups based on that attribute. Select the **Save** button to commit any changes.
- ![Meta Networks Connector Group Attributes](media/meta-networks-connector-provisioning-tutorial/groupattributes.png)
+ |Attribute|Type|Supported for filtering|Required by Meta Networks Connector|
 + |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |members|Reference||
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for Meta Networks Connector, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Meta Networks Connector, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to Meta Networks Connector by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to Meta Networks Connector by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png) This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to the provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Meta Networks Connector.
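To make these mappings concrete, here is a rough sketch of the kind of SCIM request the Azure AD provisioning service sends to the Tenant URL when it creates a user. The user values are illustrative placeholders, the `<IdP ID>` segment and the token are the values you configured earlier, and the exact payload depends on your attribute mappings:

```http
POST https://api.metanetworks.com/v1/scim/<IdP ID>/Users HTTP/1.1
Authorization: Bearer <SCIM Authentication Token>
Content-Type: application/scim+json

{
  "schemas": [ "urn:ietf:params:scim:schemas:core:2.0:User" ],
  "userName": "b.simon@contoso.com",
  "name": {
    "givenName": "B",
    "familyName": "Simon"
  },
  "active": true,
  "phoneNumbers": [
    {
      "type": "work",
      "value": "+16175551212"
    }
  ]
}
```

Groups are provisioned the same way through the SCIM `/Groups` endpoint, carrying the `displayName` and `members` attributes from the group mapping table. The same bearer token is used for the **Test Connection** check and for the ongoing synchronization cycles.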
-For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
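If you prefer to check the provisioning logs programmatically rather than in the portal, they are also exposed through Microsoft Graph. A minimal sketch, assuming you already have an access token with a suitable permission such as `AuditLog.Read.All`:

```http
GET https://graph.microsoft.com/v1.0/auditLogs/provisioning HTTP/1.1
Authorization: Bearer <access token>
```

Each returned entry summarizes a single provisioning event and indicates whether it succeeded.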
+
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Mx3 Diagnostics Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md
na-+ Last updated 10/12/2021
active-directory Netpresenter Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netpresenter-provisioning-tutorial.md
na-+ Last updated 10/04/2021
active-directory Openlearning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openlearning-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with OpenLearning'
+description: Learn how to configure single sign-on between Azure Active Directory and OpenLearning.
++++++++ Last updated : 02/17/2022++++
+# Tutorial: Azure AD SSO integration with OpenLearning
+
+In this tutorial, you'll learn how to integrate OpenLearning with Azure Active Directory (Azure AD). When you integrate OpenLearning with Azure AD, you can:
+
+* Control in Azure AD who has access to OpenLearning.
+* Enable your users to be automatically signed-in to OpenLearning with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* OpenLearning single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* OpenLearning supports **SP** initiated SSO.
+
+## Add OpenLearning from the gallery
+
+To configure the integration of OpenLearning into Azure AD, you need to add OpenLearning from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **OpenLearning** in the search box.
+1. Select **OpenLearning** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for OpenLearning
+
+Configure and test Azure AD SSO with OpenLearning using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in OpenLearning.
+
+To configure and test Azure AD SSO with OpenLearning, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure OpenLearning SSO](#configure-openlearning-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create OpenLearning test user](#create-openlearning-test-user)** - to have a counterpart of B.Simon in OpenLearning that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **OpenLearning** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Upload metadata file](common/upload-metadata.png)
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![choose metadata file](common/browse-upload-metadata.png)
+
+ c. After the metadata file is successfully uploaded, the **Identifier** value gets auto populated in Basic SAML Configuration section.
+
+ d. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://www.openlearning.com/saml-redirect/<institution_id>/<idp_name>/`
+
+ > [!Note]
+ > If the **Identifier** value does not get auto populated, then please fill in the value manually according to your requirement. The Sign-on URL value is not real. Update this value with the actual Sign-on URL. Contact [OpenLearning Client support team](mailto:dev@openlearning.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up OpenLearning** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+1. The OpenLearning application requires token encryption to be enabled for SSO to work. To activate token encryption, go to **Azure Active Directory** > **Enterprise applications** and select **Token encryption**. For more information, refer to this [link](../manage-apps/howto-saml-token-encryption.md).
+
+ ![Screenshot shows the activation of Token Encryption.](./media/openlearning-tutorial/token.png "Token Encryption")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to OpenLearning.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **OpenLearning**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure OpenLearning SSO
+
+1. Log in to your OpenLearning company site as an administrator.
+
+1. Go to **SETTINGS** > **Integrations** and click **ADD** under SAML Identity Provider (IDP) Configuration.
+
+1. In the **SAML Identity Provider** page, perform the following steps:
+
+ ![Screenshot shows SAML settings](./media/openlearning-tutorial/certificate.png "SAML settings")
+
+ 1. In the **Name (required)** textbox, type a short configuration name.
+
 + 1. Copy the **Reply(ACS) Url** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. In the **Entity ID/Issuer URL (required)** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ 1. In the **Sign-In URL (required)** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
 + 1. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and paste the content into the **Certificate (required)** textbox.
+
 + 1. Download the **Metadata XML** file and upload it in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Click **Save**.
+
+### Create OpenLearning test user
+
+1. In a different web browser window, log in to your OpenLearning website as an administrator.
+
+1. Navigate to **People** and select **Invite People**.
+
+1. Enter the valid **Email Addresses** in the textbox and click **INVITE ALL USERS**.
+
+ ![Screenshot shows inviting all users](./media/openlearning-tutorial/users.png "SAML USERS")
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to OpenLearning Sign-on URL where you can initiate the login flow.
+
+* Go to OpenLearning Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the OpenLearning tile in the My Apps, this will redirect to OpenLearning Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
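At the protocol level, SP-initiated sign-on is a browser redirect from OpenLearning to your tenant's SAML endpoint. A rough sketch of the exchange (all values are placeholders, and the exact redirect OpenLearning issues is an implementation detail of the application):

```http
GET https://www.openlearning.com/saml-redirect/<institution_id>/<idp_name>/ HTTP/1.1

HTTP/1.1 302 Found
Location: https://login.microsoftonline.com/<tenant-id>/saml2?SAMLRequest=<deflated-and-base64-encoded-AuthnRequest>&RelayState=<opaque-state>
```

After the user authenticates, Azure AD posts the signed SAML assertion back to the Reply (ACS) URL configured in the **Basic SAML Configuration** section.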
+
+## Next steps
+
+Once you configure OpenLearning you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Oracle Cloud Infrastructure Console Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md
Add Oracle Cloud Infrastructure Console from the Azure AD application gallery to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Oracle Cloud Infrastructure Console, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
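As an alternative to editing the manifest in the portal, app roles can also be added programmatically through Microsoft Graph. A minimal sketch, assuming an access token that can update the application object; the role name, value, description, and GUID below are hypothetical examples:

```http
PATCH https://graph.microsoft.com/v1.0/applications/<application-object-id> HTTP/1.1
Authorization: Bearer <access token>
Content-Type: application/json

{
  "appRoles": [
    {
      "id": "9a6b0f6e-1111-4c2d-9e3f-000000000001",
      "allowedMemberTypes": [ "User" ],
      "description": "Example role used for provisioning assignments",
      "displayName": "Provisioning User",
      "isEnabled": true,
      "value": "Provisioning.User"
    }
  ]
}
```

Note that a `PATCH` of `appRoles` replaces the whole collection, so include any roles that already exist on the application, and set `isEnabled` to `false` on a role before removing it.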
+ ## Step 5. Configure automatic user provisioning to Oracle Cloud Infrastructure Console
active-directory Prodpad Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prodpad-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/09/2022
active-directory Reviewsnap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reviewsnap-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Reviewsnap | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Reviewsnap'
description: Learn how to configure single sign-on between Azure Active Directory and Reviewsnap.
Previously updated : 03/26/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with Reviewsnap
+# Tutorial: Azure AD SSO integration with Reviewsnap
-In this tutorial, you learn how to integrate Reviewsnap with Azure Active Directory (Azure AD).
-Integrating Reviewsnap with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Reviewsnap with Azure Active Directory (Azure AD). When you integrate Reviewsnap with Azure AD, you can:
-* You can control in Azure AD who has access to Reviewsnap.
-* You can enable your users to be automatically signed-in to Reviewsnap (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Reviewsnap.
+* Enable your users to be automatically signed-in to Reviewsnap with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Reviewsnap, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Reviewsnap single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Reviewsnap single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Reviewsnap supports **SP and IDP** initiated SSO
-
-## Adding Reviewsnap from the gallery
-
-To configure the integration of Reviewsnap into Azure AD, you need to add Reviewsnap from the gallery to your list of managed SaaS apps.
-
-**To add Reviewsnap from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Reviewsnap supports **SP and IDP** initiated SSO.
-4. In the search box, type **Reviewsnap**, select **Reviewsnap** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
- ![Reviewsnap in the results list](common/search-new-app.png)
+## Add Reviewsnap from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Reviewsnap based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Reviewsnap needs to be established.
-
-To configure and test Azure AD single sign-on with Reviewsnap, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Reviewsnap Single Sign-On](#configure-reviewsnap-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Reviewsnap test user](#create-reviewsnap-test-user)** - to have a counterpart of Britta Simon in Reviewsnap that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Reviewsnap into Azure AD, you need to add Reviewsnap from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Reviewsnap** in the search box.
+1. Select **Reviewsnap** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Reviewsnap
-To configure Azure AD single sign-on with Reviewsnap, perform the following steps:
+Configure and test Azure AD SSO with Reviewsnap using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Reviewsnap.
-1. In the [Azure portal](https://portal.azure.com/), on the **Reviewsnap** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Reviewsnap, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Reviewsnap SSO](#configure-reviewsnap-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Reviewsnap test user](#create-reviewsnap-test-user)** - to have a counterpart of B.Simon in Reviewsnap that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Reviewsnap** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
-
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the URL:
`https://app.reviewsnap.com` b. In the **Reply URL** text box, type a URL using the following pattern:
To configure Azure AD single sign-on with Reviewsnap, perform the following step
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
-
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://app.reviewsnap.com/login` > [!NOTE]
To configure Azure AD single sign-on with Reviewsnap, perform the following step
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Reviewsnap Single Sign-On
-
-To configure single sign-on on **Reviewsnap** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Reviewsnap support team](mailto:support@reviewsnap.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Reviewsnap.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Reviewsnap.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Reviewsnap**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Reviewsnap**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Reviewsnap SSO
-2. In the applications list, select **Reviewsnap**.
-
- ![The Reviewsnap link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on the **Reviewsnap** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [Reviewsnap support team](mailto:support@reviewsnap.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Reviewsnap test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in Reviewsnap. Work with [Reviewsnap support team](mailto:support@reviewsnap.com) to add the users in the Reviewsnap platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Reviewsnap test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in Reviewsnap. Work with [Reviewsnap support team](mailto:support@reviewsnap.com) to add the users in the Reviewsnap platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in Azure portal. This will redirect to Reviewsnap Sign on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to Reviewsnap Sign-on URL directly and initiate the login flow from there.
-When you click the Reviewsnap tile in the Access Panel, you should be automatically signed in to the Reviewsnap for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Reviewsnap for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Reviewsnap tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Reviewsnap for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
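In IDP-initiated mode there is no redirect from the application; instead, Azure AD delivers the signed SAML response directly to the Reply URL you configured in the **Basic SAML Configuration** section. Roughly, the browser submits a form post like the following (a sketch; both values are placeholders):

```http
POST <Reply URL from the Basic SAML Configuration section> HTTP/1.1
Content-Type: application/x-www-form-urlencoded

SAMLResponse=<base64-encoded-signed-SAML-response>&RelayState=<optional-opaque-state>
```

SP-initiated mode works the other way around: sign-in starts at the Reviewsnap Sign-on URL, which sends the browser to Azure AD with a `SAMLRequest`.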
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Reviewsnap you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Rolepoint Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rolepoint-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with RolePoint | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with RolePoint'
description: In this tutorial, you'll learn how to configure single sign-on between Azure Active Directory and RolePoint.
Previously updated : 03/15/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with RolePoint
+# Tutorial: Azure AD SSO integration with RolePoint
-In this tutorial, you'll learn how to integrate RolePoint with Azure Active Directory (Azure AD).
-This integration provides these benefits:
+In this tutorial, you'll learn how to integrate RolePoint with Azure Active Directory (Azure AD). When you integrate RolePoint with Azure AD, you can:
-* You can use Azure AD to control who has access to RolePoint.
-* You can enable your users to be automatically signed in to RolePoint (single sign-on) with their Azure AD accounts.
-* You can manage your accounts in one central location: the Azure portal.
-
-To learn more about SaaS app integration with Azure AD, see [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to RolePoint.
+* Enable your users to be automatically signed-in to RolePoint with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
In this tutorial, you'll configure and test Azure AD single sign-on in a test en
## Add RolePoint from the gallery
-To set up the integration of RolePoint into Azure AD, you need to add RolePoint from the gallery to your list of managed SaaS apps.
-
-1. In the [Azure portal](https://portal.azure.com), in the left pane, select **Azure Active Directory**:
-
- ![Select Azure Active Directory](common/select-azuread.png)
-
-2. Go to **Enterprise applications** > **All applications**:
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add an application, select **New application** at the top of the window:
-
- ![Select New application](common/add-new-app.png)
-
-4. In the search box, enter **RolePoint**. Select **RolePoint** in the search results and then select **Add**.
-
- ![Search results](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+To configure the integration of RolePoint into Azure AD, you need to add RolePoint from the gallery to your list of managed SaaS apps.
-In this section, you'll configure and test Azure AD single sign-on with RolePoint by using a test user named Britta Simon.
-To enable single sign-on, you need to establish a relationship between an Azure AD user and the corresponding user in RolePoint.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **RolePoint** in the search box.
+1. Select **RolePoint** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with RolePoint, you need to complete these steps:
+## Configure and test Azure AD SSO for RolePoint
-1. **[Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on)** to enable the feature for your users.
-2. **[Configure RolePoint single sign-on](#configure-rolepoint-single-sign-on)** on the application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable Azure AD single sign-on for the user.
-5. **[Create a RolePoint test user](#create-a-rolepoint-test-user)** that's linked to the Azure AD representation of the user.
-6. **[Test single sign-on](#test-single-sign-on)** to verify that the configuration works.
+Configure and test Azure AD SSO with RolePoint using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RolePoint.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with RolePoint, perform the following steps:
-In this section, you'll enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure RolePoint SSO](#configure-rolepoint-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create RolePoint test user](#create-rolepoint-test-user)** - to have a counterpart of B.Simon in RolePoint that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with RolePoint, take these steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the RolePoint application integration page, select **Single sign-on**:
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Select single sign-on](common/select-sso.png)
+1. In the Azure portal, on the **RolePoint** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. In the **Select a single sign-on method** dialog box, select **SAML/WS-Fed** mode to enable single sign-on:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Select a single sign-on method](common/select-saml-option.png)
+4. In the **Basic SAML Configuration** dialog box, perform the following steps:
-3. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon to open the **Basic SAML Configuration** dialog box:
+ 1. In the **Identifier (Entity ID)** box, type a URL using the following pattern:
- ![Edit icon](common/edit-urls.png)
-
-4. In the **Basic SAML Configuration** dialog box, take the following steps.
-
- ![Basic SAML Configuration dialog box](common/sp-identifier.png)
-
- 1. In the **Sign on URL** box, enter a URL in this pattern:
-
- `https://<subdomain>.rolepoint.com/login`
+ `https://app.rolepoint.com/<instancename>`
- 1. In the **Identifier (Entity ID)** box, enter a URL in this pattern:
+ 1. In the **Sign on URL** box, type a URL using the following pattern:
- `https://app.rolepoint.com/<instancename>`
+ `https://<subdomain>.rolepoint.com/login`
> [!NOTE]
- > These values are placeholders. You need to use the actual sign-on URL and identifier. We suggest that you use a unique string value in the identifier. Contact the [RolePoint support team](mailto:info@rolepoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** dialog box in the Azure portal.
+ > These values are placeholders. You need to use the actual Identifier and Sign on URL. We suggest that you use a unique string value in the identifier. Contact the [RolePoint support team](mailto:info@rolepoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** dialog box in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select the **Download** link next to **Federation Metadata XML**, per your requirements, and save the file on your computer.
To configure Azure AD single sign-on with RolePoint, take these steps:
![Copy the configuration URLs](common/copy-configuration-urls.png)
- 1. **Login URL**.
-
- 1. **Azure AD Identifier**.
-
- 1. **Logout URL**.
--
-### Configure RolePoint single sign-on
-
-To set up single sign-on on the RolePoint side, you need to work with the [RolePoint support team](mailto:info@rolepoint.com). Send this team the Federation Metadata XML file and the URLs that you got from the Azure portal. They'll configure RolePoint to ensure the SAML SSO connection is set properly on both sides.
- ### Create an Azure AD test user
-In this section, you'll create a test user named Britta Simon in the Azure portal.
-
-1. In the Azure portal, select **Azure Active Directory** in the left pane, select **Users**, and then select **All users**:
-
- ![Select All users](common/users.png)
-
-2. Select **New user** at the top of the window:
-
- ![Select New user](common/new-user.png)
-
-3. In the **User** dialog box, take the following steps.
-
- ![User dialog box](common/user-properties.png)
-
- 1. In the **Name** box, enter **BrittaSimon**.
-
- 1. In the **User name** box, enter **BrittaSimon@\<yourcompanydomain>.\<extension>**. (For example, BrittaSimon@contoso.com.)
+In this section, you'll create a test user in the Azure portal called B.Simon.
- 1. Select **Show Password**, and then write down the value that's in the **Password** box.
-
- 1. Select **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you'll enable Britta Simon to use Azure single sign-on by granting her access to RolePoint.
-
-1. In the Azure portal, select **Enterprise applications**, select **All applications**, and then select **RolePoint**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the list of applications, select **RolePoint**.
-
- ![List of applications](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to RolePoint.
-3. In the left pane, select **Users and groups**:
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **RolePoint**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Select Users and groups](common/users-groups-blade.png)
+## Configure RolePoint SSO
-4. Select **Add user**, and then select **Users and groups** in the **Add Assignment** dialog box.
-
- ![Select Add user](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog box, select **Britta Simon** in the users list, and then click the **Select** button at the bottom of the window.
-
-6. If you expect a role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Click the **Select** button at the bottom of the window.
-
-7. In the **Add Assignment** dialog box, select **Assign**.
+To set up single sign-on on the RolePoint side, you need to work with the [RolePoint support team](mailto:info@rolepoint.com). Send this team the Federation Metadata XML file and the URLs that you got from the Azure portal. They'll configure RolePoint to ensure the SAML SSO connection is set properly on both sides.
-### Create a RolePoint test user
+### Create RolePoint test user
Next, you need to create a user named Britta Simon in RolePoint. Work with the [RolePoint support team](mailto:info@rolepoint.com) to add users to RolePoint. Users need to be created and activated before you can use single sign-on.
-### Test single sign-on
+## Test SSO
-Now you need to test your Azure AD single sign-on configuration by using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you select the RolePoint tile in the Access Panel, you should be automatically signed in to the RolePoint instance for which you set up SSO. For more information about the Access Panel, see [Access and use apps on the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to RolePoint Sign-on URL where you can initiate the login flow.
-## Additional resources
+* Go to RolePoint Sign-on URL directly and initiate the login flow from there.
-- [Tutorials for integrating SaaS applications with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the RolePoint tile in the My Apps, this will redirect to RolePoint Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure RolePoint you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Shucchonavi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shucchonavi-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Shuccho Navi | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Shuccho Navi'
description: Learn how to configure single sign-on between Azure Active Directory and Shuccho Navi.
Previously updated : 03/07/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with Shuccho Navi
+# Tutorial: Azure AD SSO integration with Shuccho Navi
-In this tutorial, you learn how to integrate Shuccho Navi with Azure Active Directory (Azure AD).
-Integrating Shuccho Navi with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Shuccho Navi with Azure Active Directory (Azure AD). When you integrate Shuccho Navi with Azure AD, you can:
-* You can control in Azure AD who has access to Shuccho Navi.
-* You can enable your users to be automatically signed-in to Shuccho Navi (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Shuccho Navi.
+* Enable your users to be automatically signed-in to Shuccho Navi with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Shuccho Navi, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Shuccho Navi single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Shuccho Navi single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Shuccho Navi supports **SP** initiated SSO
-
-## Adding Shuccho Navi from the gallery
-
-To configure the integration of Shuccho Navi into Azure AD, you need to add Shuccho Navi from the gallery to your list of managed SaaS apps.
-
-**To add Shuccho Navi from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* Shuccho Navi supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-4. In the search box, type **Shuccho Navi**, select **Shuccho Navi** from result panel then click **Add** button to add the application.
+## Add Shuccho Navi from the gallery
- ![Shuccho Navi in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Shuccho Navi based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Shuccho Navi needs to be established.
-
-To configure and test Azure AD single sign-on with Shuccho Navi, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Shuccho Navi Single Sign-On](#configure-shuccho-navi-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Shuccho Navi test user](#create-shuccho-navi-test-user)** - to have a counterpart of Britta Simon in Shuccho Navi that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of Shuccho Navi into Azure AD, you need to add Shuccho Navi from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Shuccho Navi** in the search box.
+1. Select **Shuccho Navi** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with Shuccho Navi, perform the following steps:
+## Configure and test Azure AD SSO for Shuccho Navi
-1. In the [Azure portal](https://portal.azure.com/), on the **Shuccho Navi** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Shuccho Navi using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Shuccho Navi.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Shuccho Navi, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Shuccho Navi SSO](#configure-shuccho-navi-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Shuccho Navi test user](#create-shuccho-navi-test-user)** - to have a counterpart of B.Simon in Shuccho Navi that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Shuccho Navi** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Shuccho Navi Domain and URLs single sign-on information](common/sp-signonurl.png)
+4. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://naviauth.nta.co.jp/saml/login?ENTP_CD=<Your company code>`
To configure Azure AD single sign-on with Shuccho Navi, perform the following st
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Shuccho Navi Single Sign-On
-
-To configure single sign-on on **Shuccho Navi** side, you need to send the downloaded **Metadata XML** and appropriate copied URLs from Azure portal to [Shuccho Navi support team](mailto:sys_ntabtm@nta.co.jp). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Shuccho Navi.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Shuccho Navi.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Shuccho Navi**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Shuccho Navi**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Shuccho Navi SSO
-2. In the applications list, select **Shuccho Navi**.
-
- ![The Shuccho Navi link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog, select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog, click the **Assign** button.
+To configure single sign-on on the **Shuccho Navi** side, send the downloaded **Metadata XML** and the appropriate copied URLs from the Azure portal to the [Shuccho Navi support team](mailto:sys_ntabtm@nta.co.jp). The support team uses these values to configure the SAML SSO connection properly on both sides.
### Create Shuccho Navi test user In this section, you create a user called Britta Simon in Shuccho Navi. Work with [Shuccho Navi support team](mailto:sys_ntabtm@nta.co.jp) to add the users in the Shuccho Navi platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Shuccho Navi tile in the Access Panel, you should be automatically signed in to the Shuccho Navi for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Shuccho Navi Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to Shuccho Navi Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Shuccho Navi tile in My Apps, you're redirected to the Shuccho Navi Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Shuccho Navi, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Soonr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/soonr-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Soonr Workplace | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Soonr Workplace'
description: Learn how to configure single sign-on between Azure Active Directory and Soonr Workplace.
Previously updated : 04/08/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with Soonr Workplace
+# Tutorial: Azure AD SSO integration with Soonr Workplace
-In this tutorial, you learn how to integrate Soonr Workplace with Azure Active Directory (Azure AD).
-Integrating Soonr Workplace with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Soonr Workplace with Azure Active Directory (Azure AD). When you integrate Soonr Workplace with Azure AD, you can:
-* You can control in Azure AD who has access to Soonr Workplace.
-* You can enable your users to be automatically signed-in to Soonr Workplace (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Soonr Workplace.
+* Enable your users to be automatically signed-in to Soonr Workplace with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Soonr Workplace, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Soonr Workplace single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Soonr Workplace single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Soonr Workplace supports **SP and IDP** initiated SSO
+* Soonr Workplace supports **SP and IDP** initiated SSO.
-## Adding Soonr Workplace from the gallery
+## Add Soonr Workplace from the gallery
To configure the integration of Soonr Workplace into Azure AD, you need to add Soonr Workplace from the gallery to your list of managed SaaS apps.
-**To add Soonr Workplace from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click the **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add a new application, click the **New application** button at the top of the dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Soonr Workplace**, select **Soonr Workplace** from the result panel then click the **Add** button to add the application.
-
- ![Soonr Workplace in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Soonr Workplace based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Soonr Workplace needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Soonr Workplace** in the search box.
+1. Select **Soonr Workplace** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Soonr Workplace, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Soonr Workplace
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Soonr Workplace Single Sign-On](#configure-soonr-workplace-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Soonr Workplace test user](#create-soonr-workplace-test-user)** - to have a counterpart of Britta Simon in Soonr Workplace that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Soonr Workplace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Soonr Workplace.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Soonr Workplace, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Soonr Workplace SSO](#configure-soonr-workplace-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Soonr Workplace test user](#create-soonr-workplace-test-user)** - to have a counterpart of B.Simon in Soonr Workplace that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Soonr Workplace, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Soonr Workplace** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **Soonr Workplace** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://<servername>.soonr.com/singlesignon/saml/metadata`
To configure Azure AD single sign-on with Soonr Workplace, perform the following
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<servername>.soonr.com/singlesignon/saml/SSO`
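   As an illustration of the patterns above, with a hypothetical server name of `contoso` (replace it with your actual Soonr Workplace server name), the values would take this shape:

   ```http
   Identifier (Entity ID): https://contoso.soonr.com/singlesignon/saml/metadata
   Sign-on URL (SP mode):  https://contoso.soonr.com/singlesignon/saml/SSO
   ```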
To configure Azure AD single sign-on with Soonr Workplace, perform the following
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Soonr Workplace Single Sign-On
-
-To configure single sign-on on **Soonr Workplace** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Soonr Workplace support team](https://awp.autotask.net/help/). They set this setting to have the SAML SSO connection set properly on both sides.
-
-> [!Note]
-> If you require assistance with configuring Autotask Workplace, please see [this page](https://awp.autotask.net/help/Content/0_HOME/Support_for_End_Clients.htm) to get assistance with your Workplace account.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Soonr Workplace.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Soonr Workplace.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Soonr Workplace**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Soonr Workplace**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Soonr Workplace SSO
-2. In the applications list, select **Soonr Workplace**.
-
- ![The Soonr Workplace link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+To configure single sign-on on the **Soonr Workplace** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Soonr Workplace support team](https://awp.autotask.net/help/). The support team uses these values to configure the SAML SSO connection properly on both sides.
- ![The Add Assignment pane](common/add-assign-user.png)
+> [!Note]
+> If you require assistance with configuring Autotask Workplace, please see [this page](https://awp.autotask.net/help/Content/0_HOME/Support_for_End_Clients.htm) to get assistance with your Workplace account.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Soonr Workplace test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in Soonr Workplace. Work with [Soonr Workplace support team](https://awp.autotask.net/help/) to add the users in the Soonr Workplace platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Soonr Workplace test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in Soonr Workplace. Work with [Soonr Workplace support team](https://awp.autotask.net/help/) to add the users in the Soonr Workplace platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This will redirect to the Soonr Workplace Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to Soonr Workplace Sign-on URL directly and initiate the login flow from there.
-When you click the Soonr Workplace tile in the Access Panel, you should be automatically signed in to the Soonr Workplace for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Soonr Workplace for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Soonr Workplace tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Soonr Workplace for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Soonr Workplace, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Terratrue Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terratrue-provisioning-tutorial.md
ms.devlang: na Last updated 12/16/2021
active-directory Tonicdm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tonicdm-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with TonicDM | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with TonicDM'
description: Learn how to configure single sign-on between Azure Active Directory and TonicDM.
Previously updated : 03/28/2019 Last updated : 02/25/2022
-# Tutorial: Azure Active Directory integration with TonicDM
+# Tutorial: Azure AD SSO integration with TonicDM
-In this tutorial, you learn how to integrate TonicDM with Azure Active Directory (Azure AD).
-Integrating TonicDM with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate TonicDM with Azure Active Directory (Azure AD). When you integrate TonicDM with Azure AD, you can:
-* You can control in Azure AD who has access to TonicDM.
-* You can enable your users to be automatically signed-in to TonicDM (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to TonicDM.
+* Enable your users to be automatically signed-in to TonicDM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with TonicDM, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* TonicDM single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* TonicDM single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* TonicDM supports **SP** initiated SSO
-
-* TonicDM supports **Just In Time** user provisioning
-
-## Adding TonicDM from the gallery
-
-To configure the integration of TonicDM into Azure AD, you need to add TonicDM from the gallery to your list of managed SaaS apps.
-
-**To add TonicDM from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* TonicDM supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+* TonicDM supports **Just In Time** user provisioning.
-4. In the search box, type **TonicDM**, select **TonicDM** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![TonicDM in the results list](common/search-new-app.png)
+## Add TonicDM from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with TonicDM based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in TonicDM needs to be established.
-
-To configure and test Azure AD single sign-on with TonicDM, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure TonicDM Single Sign-On](#configure-tonicdm-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create TonicDM test user](#create-tonicdm-test-user)** - to have a counterpart of Britta Simon in TonicDM that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of TonicDM into Azure AD, you need to add TonicDM from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **TonicDM** in the search box.
+1. Select **TonicDM** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for TonicDM
-To configure Azure AD single sign-on with TonicDM, perform the following steps:
+Configure and test Azure AD SSO with TonicDM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TonicDM.
-1. In the [Azure portal](https://portal.azure.com/), on the **TonicDM** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with TonicDM, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TonicDM SSO](#configure-tonicdm-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create TonicDM test user](#create-tonicdm-test-user)** - to have a counterpart of B.Simon in TonicDM that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **TonicDM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![TonicDM Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type the URL:
+ `https://tonicdm.com/saml/metadata`
- a. In the **Sign on URL** text box, type a URL:
+ b. In the **Sign on URL** text box, type the URL:
`https://tonicdm.com/`
- b. In the **Identifier (Entity ID)** text box, type a URL:
- `https://tonicdm.com/saml/metadata`
- 5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
To configure Azure AD single sign-on with TonicDM, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure TonicDM Single Sign-On
-
-To configure single sign-on on **TonicDM** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [TonicDM support team](mailto:support@tonicdm.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to TonicDM.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **TonicDM**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **TonicDM**.
-
- ![The TonicDM link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TonicDM.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TonicDM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure TonicDM SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **TonicDM** side, send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [TonicDM support team](mailto:support@tonicdm.com). The support team uses these values to configure the SAML SSO connection properly on both sides.
### Create TonicDM test user In this section, you create a user called Britta Simon in TonicDM. Work with [TonicDM support team](mailto:support@tonicdm.com) to add the users in the TonicDM platform. Users must be created and activated before you use single sign-on
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the TonicDM tile in the Access Panel, you should be automatically signed in to the TonicDM for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the TonicDM Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to TonicDM Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the TonicDM tile in My Apps, you're redirected to the TonicDM Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure TonicDM, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Us Bank Prepaid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/us-bank-prepaid-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with U.S. Bank Prepaid'
+description: Learn how to configure single sign-on between Azure Active Directory and U.S. Bank Prepaid.
+ Last updated : 03/03/2022
+# Tutorial: Azure AD SSO integration with U.S. Bank Prepaid
+
+In this tutorial, you'll learn how to integrate U.S. Bank Prepaid with Azure Active Directory (Azure AD). When you integrate U.S. Bank Prepaid with Azure AD, you can:
+
+* Control in Azure AD who has access to U.S. Bank Prepaid.
+* Enable your users to be automatically signed-in to U.S. Bank Prepaid with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* U.S. Bank Prepaid single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* U.S. Bank Prepaid supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add U.S. Bank Prepaid from the gallery
+
+To configure the integration of U.S. Bank Prepaid into Azure AD, you need to add U.S. Bank Prepaid from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **U.S. Bank Prepaid** in the search box.
+1. Select **U.S. Bank Prepaid** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for U.S. Bank Prepaid
+
+Configure and test Azure AD SSO with U.S. Bank Prepaid using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in U.S. Bank Prepaid.
+
+To configure and test Azure AD SSO with U.S. Bank Prepaid, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure U.S. Bank Prepaid SSO](#configure-us-bank-prepaid-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create U.S. Bank Prepaid test user](#create-us-bank-prepaid-test-user)** - to have a counterpart of B.Simon in U.S. Bank Prepaid that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **U.S. Bank Prepaid** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **SP** initiated mode then perform the following steps:
+
+ a. In the **Identifier** text box, type the value:
+ `USBank:SAML2.0:Prepaid_SP`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://uat-federation.usbank.com/sp/ACS.saml2`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<Environment>.usbank.com/sp/startSSO.ping?PartnerIdpId=<ID>`
+
+ > [!NOTE]
+ > The value is not real. Update this value with the actual Sign-on URL. Contact [U.S. Bank Prepaid Client support team](mailto:web.access.management@usbank.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
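   Purely as an illustration of the pattern, with made-up placeholder values for the environment and `PartnerIdpId` segments (only U.S. Bank can supply the real Sign-on URL), a filled-in SP configuration might look like this:

   ```http
   Identifier:  USBank:SAML2.0:Prepaid_SP
   Reply URL:   https://uat-federation.usbank.com/sp/ACS.saml2
   Sign-on URL: https://myenvironment.usbank.com/sp/startSSO.ping?PartnerIdpId=contoso-azuread
   ```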
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to U.S. Bank Prepaid.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **U.S. Bank Prepaid**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure U.S. Bank Prepaid SSO
+
+To configure single sign-on on the **U.S. Bank Prepaid** side, send the **App Federation Metadata Url** to the [U.S. Bank Prepaid support team](mailto:web.access.management@usbank.com). The support team uses this value to configure the SAML SSO connection properly on both sides.
+
+### Create U.S. Bank Prepaid test user
+
+In this section, you create a user called Britta Simon in U.S. Bank Prepaid. Work with [U.S. Bank Prepaid support team](mailto:web.access.management@usbank.com) to add the users in the U.S. Bank Prepaid platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the U.S. Bank Prepaid Sign-on URL, where you can initiate the login flow.
+
+* Go to U.S. Bank Prepaid Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the U.S. Bank Prepaid for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the U.S. Bank Prepaid tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the U.S. Bank Prepaid for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure U.S. Bank Prepaid, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Waywedo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/waywedo-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Way We Do | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Way We Do'
description: Learn how to configure single sign-on between Azure Active Directory and Way We Do.
Previously updated : 06/20/2019 Last updated : 02/25/2022
-# Tutorial: Integrate Way We Do with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Way We Do
In this tutorial, you'll learn how to integrate Way We Do with Azure Active Directory (Azure AD). When you integrate Way We Do with Azure AD, you can:
In this tutorial, you'll learn how to integrate Way We Do with Azure Active Dire
* Enable your users to be automatically signed-in to Way We Do with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites
-To get started, you need the following items:
+To configure Azure AD integration with Way We Do, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
-* Way We Do single sign-on (SSO) enabled subscription.
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Way We Do single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Way We Do supports **SP** initiated SSO
-* Way We Do supports **Just In Time** user provisioning
+* Way We Do supports **SP** initiated SSO.
+* Way We Do supports **Just In Time** user provisioning.
-## Adding Way We Do from the gallery
+## Add Way We Do from the gallery
To configure the integration of Way We Do into Azure AD, you need to add Way We Do from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Way We Do** in the search box. 1. Select **Way We Do** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Way We Do
Configure and test Azure AD SSO with Way We Do using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Way We Do.
-To configure and test Azure AD SSO with Way We Do, complete the following building blocks:
+To configure and test Azure AD SSO with Way We Do, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure Way We Do SSO](#configure-way-we-do-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Way We Do test user](#create-way-we-do-test-user)** - to have a counterpart of Britta Simon in Way We Do that is linked to the Azure AD representation of user.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Way We Do SSO](#configure-way-we-do-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Way We Do test user](#create-way-we-do-test-user)** - to have a counterpart of B.Simon in Way We Do that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Way We Do** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Way We Do** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** page, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.waywedo.com/Authentication/ExternalSignIn`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.waywedo.com`
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.waywedo.com/Authentication/ExternalSignIn`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Way We Do Client support team](mailto:support@waywedo.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Way We Do Client support team](mailto:support@waywedo.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
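   For illustration only, with a hypothetical subdomain of `contoso` (use the actual subdomain provided for your Way We Do account), the two values would follow this shape:

   ```http
   Identifier (Entity ID): https://contoso.waywedo.com
   Sign on URL:            https://contoso.waywedo.com/Authentication/ExternalSignIn
   ```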
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
-### Configure Way We Do SSO
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Way We Do.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Way We Do**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Way We Do SSO
1. To automate the configuration within Way We Do, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click the **person icon** in the top right corner of any page in Way We Do, then click **Account** in the dropdown menu.
- ![Way We Do account](./media/waywedo-tutorial/tutorial_waywedo_account.png)
+ ![Way We Do account](./media/waywedo-tutorial/account.png)
1. Click the **menu icon** to open the push navigation menu and Click **Single Sign On**.
- ![Way We Do single](./media/waywedo-tutorial/tutorial_waywedo_single.png)
+ ![Way We Do single](./media/waywedo-tutorial/single.png)
1. On the **Single sign-on setup** page, perform the following steps:
- ![Way We Do save](./media/waywedo-tutorial/tutorial_waywedo_save.png)
+ ![Way We Do save](./media/waywedo-tutorial/save.png)
1. Click the **Turn on single sign-on** toggle to **Yes** to enable Single Sign-On.
Follow these steps to enable Azure AD SSO in the Azure portal.
> Users added through single sign-on are added as general users and are not assigned a role in the system. An Administrator is able to go in and modify their security role as an editor or administrator and can also assign one or several Org Chart roles. 1. Click **Save** to persist your settings.-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Way We Do.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Way We Do**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
+
### Create Way We Do test user In this section, a user called Britta Simon is created in Way We Do. Way We Do supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Way We Do, a new one is created after authentication.
In this section, a user called Britta Simon is created in Way We Do. Way We Do s
> [!Note] > If you need to create a user manually, contact [Way We Do Client support team](mailto:support@waywedo.com).
-### Test SSO
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you select the Way We Do tile in the Access Panel, you should be automatically signed in to the Way We Do for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Way We Do Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to Way We Do Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Way We Do tile in My Apps, you'll be redirected to the Way We Do Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Way We Do, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Zwayam Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zwayam-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Zwayam | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Zwayam'
description: Learn how to configure single sign-on between Azure Active Directory and Zwayam.
Previously updated : 03/29/2019 Last updated : 02/25/2022
-# Tutorial: Azure Active Directory integration with Zwayam
+# Tutorial: Azure AD SSO integration with Zwayam
-In this tutorial, you learn how to integrate Zwayam with Azure Active Directory (Azure AD).
-Integrating Zwayam with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Zwayam with Azure Active Directory (Azure AD). When you integrate Zwayam with Azure AD, you can:
-* You can control in Azure AD who has access to Zwayam.
-* You can enable your users to be automatically signed-in to Zwayam (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Zwayam.
+* Enable your users to be automatically signed-in to Zwayam with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Zwayam, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Zwayam single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Zwayam single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Zwayam supports **SP** initiated SSO
-
-## Adding Zwayam from the gallery
-
-To configure the integration of Zwayam into Azure AD, you need to add Zwayam from the gallery to your list of managed SaaS apps.
-
-**To add Zwayam from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Zwayam supports **SP** initiated SSO.
-4. In the search box, type **Zwayam**, select **Zwayam** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Zwayam in the results list](common/search-new-app.png)
+## Add Zwayam from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Zwayam based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zwayam needs to be established.
-
-To configure and test Azure AD single sign-on with Zwayam, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zwayam Single Sign-On](#configure-zwayam-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zwayam test user](#create-zwayam-test-user)** - to have a counterpart of Britta Simon in Zwayam that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Zwayam into Azure AD, you need to add Zwayam from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zwayam** in the search box.
+1. Select **Zwayam** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
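
If you prefer scripting over the portal, gallery apps can also be instantiated through the Microsoft Graph `applicationTemplates` API. The sketch below is illustrative only and is not part of this tutorial; the application template ID and access token are placeholders you must look up for your own tenant.

```python
# Illustrative sketch: add a gallery app by instantiating its application template
# via Microsoft Graph. The template ID and token below are placeholders.
import requests

token = "<graph-access-token>"                    # placeholder: token with Application.ReadWrite.All
template_id = "<zwayam-application-template-id>"  # placeholder: look up via GET /applicationTemplates

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/applicationTemplates/{template_id}/instantiate",
    headers={"Authorization": f"Bearer {token}"},
    json={"displayName": "Zwayam"},
)
resp.raise_for_status()
# The response contains both the application object and its service principal.
print("Service principal:", resp.json()["servicePrincipal"]["id"])
```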
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Zwayam
-To configure Azure AD single sign-on with Zwayam, perform the following steps:
+Configure and test Azure AD SSO with Zwayam using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zwayam.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zwayam** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Zwayam, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zwayam SSO](#configure-zwayam-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zwayam test user](#create-zwayam-test-user)** - to have a counterpart of B.Simon in Zwayam that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Zwayam** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Zwayam Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type the URL:
+ `https://sso.zwayam.com/zwayam-saml/saml/metadata`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://sso.zwayam.com/zwayam-saml/zwayam-saml/saml/login?idp=<SAML Entity ID>`
- b. In the **Identifier (Entity ID)** text box, type a URL:
- `https://sso.zwayam.com/zwayam-saml/saml/metadata`
- > [!NOTE] > The **Sign on URL** value is not real. Update the value with the actual Sign on URL. `<SAML Entity ID>` is the Azure AD Identifier value which is explained later in the tutorial.
To configure Azure AD single sign-on with Zwayam, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Zwayam Single Sign-On
-
-To configure single sign-on on **Zwayam** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Zwayam support team](mailto:opendoors@zwayam.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
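
As an optional alternative to the portal steps above, the following Microsoft Graph sketch creates the same test user. It isn't part of the tutorial; the access token, password, and domain are placeholders, and the token needs a permission such as `User.ReadWrite.All`.

```python
# Minimal sketch: create the B.Simon test user via Microsoft Graph.
# Token, password, and domain are placeholders you must replace.
import requests

token = "<graph-access-token>"                      # placeholder
user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "BSimon",
    "userPrincipalName": "B.Simon@contoso.com",     # replace with your own domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",           # placeholder
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token}"},
    json=user,
)
resp.raise_for_status()
print("Created user with object ID:", resp.json()["id"])
```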
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zwayam.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zwayam**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Zwayam**.
-
- ![The Zwayam link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zwayam.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zwayam**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
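
If you'd rather script the assignment above, the following sketch uses the Microsoft Graph `appRoleAssignments` endpoint. It is illustrative only; all IDs shown are placeholders that you look up in your tenant (the user's object ID, the Zwayam service principal's object ID, and the app role ID you want to assign).

```python
# Illustrative sketch: assign B.Simon to the Zwayam enterprise application via
# Microsoft Graph. Every ID below is a placeholder for values from your tenant.
import requests

token = "<graph-access-token>"                                   # placeholder
user_object_id = "<b-simon-object-id>"                           # placeholder
service_principal_id = "<zwayam-service-principal-object-id>"    # placeholder
app_role_id = "<app-role-id>"                                    # placeholder (role to assign)

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{user_object_id}/appRoleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "principalId": user_object_id,        # the user being assigned
        "resourceId": service_principal_id,   # the app's service principal
        "appRoleId": app_role_id,             # the role granted to the user
    },
)
resp.raise_for_status()
print("Assignment created:", resp.json()["id"])
```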
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure Zwayam SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Zwayam** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Zwayam support team](mailto:opendoors@zwayam.com). They configure this setting to have the SAML SSO connection set properly on both sides.
### Create Zwayam test user

In this section, you create a user called Britta Simon in Zwayam. Work with [Zwayam support team](mailto:opendoors@zwayam.com) to add the users in the Zwayam platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Zwayam tile in the Access Panel, you should be automatically signed in to the Zwayam for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Zwayam Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Zwayam Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Zwayam tile in My Apps, you'll be redirected to the Zwayam Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Zwayam, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Applications that use the Azure Active Directory Verifiable Credentials service
| Tenant region | Request API endpoint POST |
|---------------|---------------------------|
-| Europe | https://beta.eu.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request |
-| Non-EU | https://beta.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request |
+| Europe | `https://beta.eu.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` |
+| Non-EU | `https://beta.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` |
To confirm which endpoint you should use, we recommend checking your Azure AD tenant's region as described above. If the Azure AD tenant is in the EU, you should use the Europe endpoint.
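
As a small illustration of the endpoint selection described above (not taken from the article), the following sketch builds the regional Request API URL from a tenant ID and region; both values are placeholders.

```python
# Illustrative only: choose the regional Request API endpoint based on the
# Azure AD tenant's region. Tenant ID and region values are placeholders.
def request_api_endpoint(tenant_id: str, tenant_region: str) -> str:
    base = (
        "https://beta.eu.did.msidentity.com"   # tenants located in Europe
        if tenant_region.upper() == "EU"
        else "https://beta.did.msidentity.com" # all other tenants
    )
    return f"{base}/v1.0/{tenant_id}/verifiablecredentials/request"

print(request_api_endpoint("00000000-0000-0000-0000-000000000000", "EU"))
```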
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Last updated 10/29/2021
Azure Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources. You can get cost recommendations from the **Cost** tab on the Advisor dashboard.
-## How to access cost recommendations in Azure Advisor
- 1. Sign in to the [**Azure portal**](https://portal.azure.com). 1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
This is a special type of resize recommendation, where Advisor analyzes workload
- If the P95 of CPU is less than two times the burstable SKUs' baseline performance
- If the current SKU does not have accelerated networking enabled (burstable SKUs don't support accelerated networking yet)
- If we determine that the Burstable SKU credits are sufficient to support the average CPU utilization over 7 days
-- The result is a recommendation suggesting that the user resize their current VM to a burstable SKU (with the same number of cores) to take advantage of the low costs and the fact that the workload has low average utilization but high spikes in cases, which is perfect for the B-series SKU.
+- The result is a recommendation suggesting that the user resize their current VM to a burstable SKU (with the same number of cores) to take advantage of the low costs and the fact that the workload has low average utilization with occasional high spikes, which can be best served by the B-series SKU.
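
The following sketch is purely illustrative of the decision rule listed above; it is not Advisor's actual implementation, and the inputs are placeholder values rather than real telemetry.

```python
# Purely illustrative: the burstable-SKU decision rule described above.
# All inputs are placeholders, not real Advisor telemetry.
def recommend_burstable_sku(p95_cpu_pct: float,
                            burstable_baseline_pct: float,
                            accelerated_networking: bool,
                            credits_cover_avg_cpu: bool) -> bool:
    return (
        p95_cpu_pct < 2 * burstable_baseline_pct  # P95 CPU below 2x the burstable baseline
        and not accelerated_networking             # burstable SKUs don't support accelerated networking yet
        and credits_cover_avg_cpu                  # credits cover the 7-day average CPU utilization
    )

# Hypothetical workload: low P95 CPU, no accelerated networking, credits sufficient.
print(recommend_burstable_sku(18.0, 10.0, False, True))  # True -> suggest a B-series resize
```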
Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU information. To be more selective about the actioning on underutilized virtual machines, you can adjust the CPU utilization rule on a per-subscription basis.
In such cases simply use the Dismiss/Postpone options associated with the recomm
We are constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
-## Optimize spend for MariaDB, MySQL, and PostgreSQL servers by right-sizing
-Advisor analyses your usage and evaluates whether your MariaDB, MySQL, or PostgreSQL database server resources have been underutilized for an extended time over the past seven days. Low resource utilization results in unwanted expenditure that you can fix without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend that you reduce the compute size (vCores) by half.
-
-## Reduce costs by eliminating unprovisioned ExpressRoute circuits
-
-Advisor identifies Azure ExpressRoute circuits that have been in the provider status of **Not provisioned** for more than one month. It recommends deleting the circuit if you aren't planning to provision the circuit with your connectivity provider.
-
-## Reduce costs by deleting or reconfiguring idle virtual network gateways
-
-Advisor identifies virtual network gateways that have been idle for more than 90 days. Because these gateways are billed hourly, you should consider reconfiguring or deleting them if you don't intend to use them anymore.
-
-## Buy reserved virtual machine instances to save money over pay-as-you-go costs
-
-Advisor reviews your virtual machine usage over the past 30 days to determine if you could save money by purchasing an Azure reservation. Advisor shows you the regions and sizes where the potential for savings is highest and the estimated savings from purchasing reservations. With Azure reservations, you can pre-purchase the base costs for your virtual machines. Discounts automatically apply to new or existing VMs that have the same size and region as your reservations. [Learn more about Azure Reserved VM Instances.](https://azure.microsoft.com/pricing/reserved-vm-instances/)
-
-Advisor also notifies you of your reserved instances that will expire in the next 30 days. It recommends that you purchase new reserved instances to avoid pay-as-you-go pricing.
-
-## Buy reserved instances for several resource types to save over your pay-as-you-go costs
-
-Advisor analyzes usage patterns for the past 30 days for the following resources and recommends reserved capacity purchases that optimize costs.
-
-### Azure Cosmos DB reserved capacity
-Advisor analyzes your Azure Cosmos DB usage patterns for the past 30 days and recommends reserved capacity purchases to optimize costs. By using reserved capacity, you can pre-purchase Azure Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and by extrapolating the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-### SQL Database and SQL Managed Instance reserved capacity
-Advisor analyzes SQL Database and SQL Managed Instance usage patterns over the past 30 days. It then recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase SQL DB hourly usage and save over your SQL compute costs. Your SQL license is charged separately and isn't discounted by the reservation. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and by extrapolating the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings. For details, see [Azure SQL Database & SQL Managed Instance reserved capacity](../azure-sql/database/reserved-capacity-overview.md).
-
-### App Service Stamp Fee reserved capacity
-Advisor analyzes the Stamp Fee usage pattern for your Azure App Service isolated environment over the past 30 days and recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase hourly usage for the isolated environment Stamp Fee and save over your pay-as-you-go costs. Note that reserved capacity applies only to the Stamp Fee and not to App Service instances. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates saving estimates for individual subscriptions by using 3-year reservation pricing based on usage patterns over the past 30 days.
-
-### Blob storage reserved capacity
-Advisor analyzes your Azure Blob storage and Azure Data Lake storage usage over the past 30 days. It then calculates reserved capacity purchases that optimize costs. With reserved capacity, you can pre-purchase hourly usage and save over your current on-demand costs. Blob storage reserved capacity applies only to data stored on Azure Blob general-purpose v2 and Azure Data Lake Storage Gen2 accounts. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-### MariaDB, MySQL, and PostgreSQL reserved capacity
-Advisor analyzes your usage patterns for Azure Database for MariaDB, Azure Database for MySQL, and Azure Database for PostgreSQL over the past 30 days. It then recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase MariaDB, MySQL, and PostgreSQL hourly usage and save over your current costs. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-### Azure Synapse Analytics reserved capacity
-Advisor analyzes your Azure Synapse Analytics usage patterns over the past 30 days and recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-## Delete unassociated public IP addresses to save money
-
-Advisor identifies public IP addresses that aren't associated with Azure resources like load balancers and VMs. A nominal charge is associated with these public IP addresses. If you don't plan to use them, you can save money by deleting them.
-
-## Delete Azure Data Factory pipelines that are failing
-
-Advisor detects Azure Data Factory pipelines that repeatedly fail. It recommends that you resolve the problems or delete the pipelines if you don't need them. You're billed for these pipelines even if though they're not serving you while they're failing.
-
-## Use standard snapshots for managed disks
-To save 60% of cost, we recommend storing your snapshots in standard storage, regardless of the storage type of the parent disk. This option is the default option for managed disk snapshots. Advisor identifies snapshots that are stored in premium storage and recommends migrating then from premium to standard storage. [Learn more about managed disk pricing.](https://aka.ms/aa_manageddisksnapshot_learnmore)
-
-## Use lifecycle management
-By using intelligence about your Azure Blob storage object count, total size, and transactions, Advisor detects whether you should enable lifecycle management to tier data on one or more of your storage accounts. It prompts you to create lifecycle management rules to automatically tier your data to cool or archive storage to optimize your storage costs while retaining your data in Azure Blob storage for application compatibility.
-
-## Create an Ephemeral OS Disk recommendation
-[Ephemeral OS Disk](../virtual-machines/ephemeral-os-disks.md) allows you to:
-- Save on storage costs for OS disks.
-- Get lower read/write latency to OS disks.
-- Get faster VM reimage operations by resetting the OS (and temporary disk) to its original state.
-
-It's preferable to use Ephemeral OS Disk for short-lived IaaS VMs or VMs with stateless workloads. Advisor provides recommendations for resources that can benefit from Ephemeral OS Disk.
-
-## Reduce Azure Data Explorer table cache-period (policy) for cluster cost optimization (Preview)
-Advisor identifies resources where reducing the table cache policy will free up Azure Data Explorer cluster nodes having low CPU utilization, memory, and a high cache size configuration.
-
-## Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container
-Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%. Cost savings amount represents potential savings from using the recommended manual throughput, based on usage in the past 7 days. Your actual savings may vary depending on the manual throughput you set and whether your average utilization of throughput continues to be similar to the time period analyzed. The estimated savings do not account for any discount that may apply to your account.
-
-## Enable autoscale on your Azure Cosmos DB database or container
-Based on your usage in the past 7 days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.
-
## Next steps

To learn more about Advisor recommendations, see:
+* [Advisor cost recommendations (full list)](advisor-reference-cost-recommendations.md)
* [Introduction to Advisor](advisor-overview.md)
* [Advisor score](azure-advisor-score.md)
* [Get started with Advisor](advisor-get-started.md)
-* [Advisor performance recommendations](advisor-performance-recommendations.md)
-* [Advisor reliability recommendations](advisor-high-availability-recommendations.md)
+* [Advisor performance recommendations](advisor-reference-performance-recommendations.md)
+* [Advisor reliability recommendations](advisor-reference-reliability-recommendations.md)
* [Advisor security recommendations](advisor-security-recommendations.md)
-* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md)
+* [Advisor operational excellence recommendations](advisor-reference-operational-excellence-recommendations.md)
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
+
+ Title: Cost recommendations
+description: Full list of available cost recommendations in Advisor.
+ Last updated : 02/04/2022++
+# Cost recommendations
+
+Azure Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources. You can get cost recommendations from the **Cost** tab on the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Cost** tab.
+
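
If you want to pull the same recommendations programmatically instead of using the portal, the following sketch calls the Azure Advisor REST API. It is an assumption-laden example, not part of this article: the subscription ID and bearer token are placeholders, and category filtering is shown via the `$filter` query parameter.

```python
# Minimal sketch: list Advisor Cost recommendations with the Azure Advisor REST API.
# Subscription ID and bearer token are placeholders you must supply.
import requests

subscription_id = "<subscription-id>"        # placeholder
token = "<bearer-token-from-azure-ad>"       # placeholder

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Advisor/recommendations"
)
params = {"api-version": "2020-01-01", "$filter": "Category eq 'Cost'"}

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, params=params)
resp.raise_for_status()

# Print the impact and the short problem description of each recommendation.
for rec in resp.json().get("value", []):
    props = rec["properties"]
    print(props["impact"], "-", props["shortDescription"]["problem"])
```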
+## Compute
+
+### Use Standard Storage to store Managed Disks snapshots
+
+To save 60% of cost, we recommend storing your snapshots in Standard Storage, regardless of the storage type of the parent disk. This is the default option for Managed Disks snapshots. Migrate your snapshot from Premium to Standard Storage. Refer to Managed Disks pricing details.
+
+Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Storage to store Managed Disks snapshots)](https://aka.ms/aa_manageddisksnapshot_learnmore).
+
+### Right-size or shutdown underutilized virtual machines
+
+We've analyzed the usage patterns of your virtual machine over the past 7 days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
+
+Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](https://aka.ms/aa_lowusagerec_learnmore).
+
+### You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.
+
+We have observed that you have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk. Note that if you decide to delete the disk, recovery is not possible. We recommend that you create a snapshot before deletion or ensure the data in the disk is no longer required.
+
+Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](https://aka.ms/unattacheddisks).
+
+## MariaDB
+
+### Right-size underutilized MariaDB servers
+
+Our internal telemetry shows that the MariaDB database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+
+Learn more about [MariaDB server - OrcasMariaDbCpuRightSize (Right-size underutilized MariaDB servers)](https://aka.ms/mariadbpricing).
+
+## MySQL
+
+### Right-size underutilized MySQL servers
+
+Our internal telemetry shows that the MySQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+
+Learn more about [MySQL server - OrcasMySQLCpuRightSize (Right-size underutilized MySQL servers)](https://aka.ms/mysqlpricing).
+
+## PostgreSQL
+
+### Right-size underutilized PostgreSQL servers
+
+Our internal telemetry shows that the PostgreSQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlCpuRightSize (Right-size underutilized PostgreSQL servers)](https://aka.ms/postgresqlpricing).
+
+## Cosmos DB
+
+### Review the configuration of your Azure Cosmos DB free tier account
+
+Your Azure Cosmos DB free tier account currently contains resources with a total provisioned throughput exceeding 1000 Request Units per second (RU/s). Because Azure Cosmos DB's free tier only covers the first 1000 RU/s of throughput provisioned across your account, any throughput beyond 1000 RU/s will be billed at the regular pricing. As a result, we anticipate that you will be charged for the throughput currently provisioned on your Azure Cosmos DB account.
+
+Learn more about [Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](/azure/cosmos-db/understand-your-bill#azure-free-tier).
+
+### Consider taking action on your idle Azure Cosmos DB containers
+
+We haven't detected any activity over the past 30 days on one or more of your Azure Cosmos DB containers. Consider lowering their throughput, or deleting them if you don't plan on using them.
+
+Learn more about [Cosmos DB account - CosmosDBIdleContainers (Consider taking action on your idle Azure Cosmos DB containers)](/azure/cosmos-db/how-to-provision-container-throughput).
+
+### Enable autoscale on your Azure Cosmos DB database or container
+
+Based on your usage in the past 7 days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.
+
+Learn more about [Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/provision-throughput-autoscale).
+
+### Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container
+
+Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%.
+
+Learn more about [Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/how-to-choose-offer).
+
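
To make the 66%/10% rule above concrete, here is a tiny illustrative sketch (not from the article) that applies it to hypothetical hourly utilization samples.

```python
# Illustrative only: apply the utilization rule above to hypothetical samples of
# "utilization of max RU/s" per hour over the analysis window.
hourly_utilization_pct = [8, 9, 7, 12, 10, 6, 11]   # placeholder values, not real telemetry
avg = sum(hourly_utilization_pct) / len(hourly_utilization_pct)

if avg > 66 or avg <= 10:
    print(f"Average utilization {avg:.0f}% - manual throughput is likely more cost-effective.")
else:
    print(f"Average utilization {avg:.0f}% - autoscale is likely the better fit.")
```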
+## Data Explorer
+
+### Unused/Empty Data Explorer resources
+
+This recommendation surfaces all Data Explorer resources provisioned more than 10 days from the last update, and found either empty or with no activity. The recommended action is to validate and consider deleting the resources.
+
+Learn more about [Data explorer resource - ADX Unused resource (Unused/Empty Data Explorer resources)](https://aka.ms/adxemptycluster).
+
+### Right-size Data Explorer resources for optimal cost
+
+One or more of these were detected: Low data capacity, CPU utilization, or memory utilization. The recommended action to improve the performance is to scale down and/or scale in the resource to the recommended configuration shown.
+
+Learn more about [Data explorer resource - Right-size for cost (Right-size Data Explorer resources for optimal cost)](https://aka.ms/adxskusize).
+
+### Reduce Data Explorer table cache policy to optimize costs
+
+Reducing the table cache policy will free up Data Explorer cluster nodes with low CPU utilization, memory, and a high cache size configuration.
+
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables (Reduce Data Explorer table cache policy to optimize costs)](https://aka.ms/adxcachepolicy).
+
+### Unused Data Explorer resources with data
+
+This recommendation surfaces all Data Explorer resources provisioned more than 10 days from the last update, and found containing data but with no activity. The recommended action is to validate and consider stopping the unused resources.
+
+Learn more about [Data explorer resource - StopUnusedClustersWithData (Unused Data Explorer resources with data)](https://aka.ms/adxunusedcluster).
+
+### Cleanup unused storage in Data Explorer resources
+
+Over time, internal extents merge operations can accumulate redundant and unused storage artifacts that remain beyond the data retention period. While this unreferenced data doesn't negatively impact the performance, it can lead to more storage use and larger costs than necessary. This recommendation surfaces Data Explorer resources that have unused storage artifacts. The recommended action is to run the cleanup command to detect and delete unused storage artifacts and reduce cost. Note that data recoverability will be reset to the cleanup time and will not be available on data that was created before running the cleanup.
+
+Learn more about [Data explorer resource - RunCleanupCommandForAzureDataExplorer (Cleanup unused storage in Data Explorer resources)](https://aka.ms/adxcleanextentcontainers).
+
+### Enable optimized autoscale for Data Explorer resources
+
+Looks like your resource could have automatically scaled to reduce costs (based on the usage patterns, cache utilization, ingestion utilization, and CPU). To optimize costs and performance, we recommend enabling optimized autoscale. To make sure you don't exceed your planned budget, add a maximum instance count when you enable this.
+
+Learn more about [Data explorer resource - EnableOptimizedAutoscaleAzureDataExplorer (Enable optimized autoscale for Data Explorer resources)](https://aka.ms/adxoptimizedautoscale).
+
+## Network
+
+### Delete ExpressRoute circuits in the provider status of Not Provisioned
+
+We noticed that your ExpressRoute circuit is in the provider status of Not Provisioned for more than one month. This circuit is currently billed hourly to your subscription. We recommend that you delete the circuit if you aren't planning to provision the circuit with your connectivity provider.
+
+Learn more about [ExpressRoute circuit - ExpressRouteCircuit (Delete ExpressRoute circuits in the provider status of Not Provisioned)](https://aka.ms/expressroute).
+
+### Repurpose or delete idle virtual network gateways
+
+We noticed that your virtual network gateway has been idle for over 90 days. This gateway is being billed hourly. You may want to reconfigure this gateway, or delete it if you do not intend to use it anymore.
+
+Learn more about [Virtual network gateway - IdleVNetGateway (Repurpose or delete idle virtual network gateways)](https://aka.ms/aa_idlevpngateway_learnmore).
+
+## Recovery Services
+
+### Use differential or incremental backup for database workloads
+
+For SQL/HANA DBs in Azure VMs being backed up to Azure, using daily differential with weekly full backup is often more cost-effective than daily full backups. For HANA, Azure Backup also supports incremental backup, which is even more cost-effective.
+
+Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](https://aka.ms/DBBackupCostOptimization).
+
+## Storage
+
+### Revisit retention policy for classic log data in storage accounts
+
+A large amount of classic log data has been detected in your storage accounts. You are billed on the capacity of data stored in storage accounts, including classic logs. We recommend that you check the retention policy of classic logs and update it to retain only the log data you need. This reduces unnecessary classic log data and lowers your capacity-based billing costs.
+
+Learn more about [Storage Account - XstoreLargeClassicLog (Revisit retention policy for classic log data in storage accounts)]( /azure/storage/common/manage-storage-analytics-logs#modify-retention-policy).
+
+## Reserved Instances
+
+### Configure automatic renewal for your expiring reservation
+
+Reserved instances listed below are expiring soon or recently expired. Your resources will continue to operate normally; however, you will be billed at the on-demand rates going forward. To optimize your costs, configure automatic renewal for these reservations or purchase a replacement manually.
+
+Learn more about [Reservation - ReservedInstancePurchaseNew (Configure automatic renewal for your expiring reservation)](https://aka.ms/reservedinstances).
+
+### Buy virtual machine reserved instances to save money over pay-as-you-go costs
+
+Reserved instances can provide a significant discount over pay-as-you-go prices. With reserved instances, you can pre-purchase the base costs for your virtual machines. Discounts will automatically apply to new or existing VMs that have the same size and region as your reserved instance. We analyzed your usage over the last 30 days and recommend money-saving reserved instances.
+
+Learn more about [Virtual machine - ReservedInstance (Buy virtual machine reserved instances to save money over pay-as-you-go costs)](https://aka.ms/reservedinstances).
+
+### Consider Cosmos DB reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Cosmos DB usage pattern over the last 30 days and calculated a reserved instance purchase that maximizes your savings. With a reserved instance, you can pre-purchase Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings even more.
+
+Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs
+
+We analyzed your SQL PaaS usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for your SQL PaaS deployments and save over your SQL PaaS compute costs. SQL license is charged separately and is not discounted by the reservation. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SQLReservedCapacity (Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider App Service stamp fee reserved instance to save over your on-demand costs
+
+We analyzed your App Service isolated environment stamp fees usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for the isolated environment stamp fee and save over your Pay-as-you-go costs. Note that reserved instance only applies to the stamp fee and not to the App Service instances. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions based on usage pattern over last 30 days.
+
+Learn more about [Subscription - AppServiceReservedCapacity (Consider App Service stamp fee reserved instance to save over your on-demand costs)](https://aka.ms/rirecommendations).
+
+### Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Azure Database for MariaDB usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MariaDB hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - MariaDBSQLReservedCapacity (Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Database for MySQL reserved instance to save over your pay-as-you-go costs
+
+We analyzed your MySQL Database usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MySQL hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - MySQLReservedCapacity (Consider Database for MySQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Database for PostgreSQL usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With a reserved instance, you can pre-purchase PostgreSQL Database hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Cache for Redis reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Cache for Redis usage pattern over last 30 days and calculated reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cache for Redis hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - RedisCacheReservedCapacity (Consider Cache for Redis reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Azure Synapse Analytics usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With a reserved instance, you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### (Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs
+
+We analyzed your Azure Blob and Datalake storage usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Blob storage reserved instance applies only to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen 2). Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs)](https://aka.ms/rirecommendations).
+
+### (Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs
+
+We analyzed your Azure Data Explorer usage pattern over last 30 days and recommend reserved capacity purchase that maximizes your savings. With reserved capacity you can pre-purchase Data Explorer hourly usage and get savings over your on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and last 30 day's usage pattern. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - DataExplorerReservedCapacity ((Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Azure Dedicated Host reserved instance to save over your on-demand costs
+
+We analyzed your Azure Dedicated Host usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureDedicatedHostReservedCapacity (Consider Azure Dedicated Host reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Data Factory reserved instance to save over your on-demand costs
+
+We analyzed your Data Factory usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - DataFactorybReservedCapacity (Consider Data Factory reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure Data Explorer reserved instance to save over your on-demand costs
+
+We analyzed your Azure Data Explorer usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureDataExplorerReservedCapacity (Consider Azure Data Explorer reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure Files reserved instance to save over your on-demand costs
+
+We analyzed your Azure Files usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureFilesReservedCapacity (Consider Azure Files reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure VMware Solution reserved instance to save over your on-demand costs
+
+We analyzed your Azure VMware Solution usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureVMwareSolutionReservedCapacity (Consider Azure VMware Solution reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### (Preview) Consider Databricks reserved capacity to save over your on-demand costs
+
+We analyzed your Databricks usage over last 30 days and calculated reserved capacity purchase that would maximize your savings. With reserved capacity you can pre-purchase hourly usage and save over your current on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - DataBricksReservedCapacity ((Preview) Consider Databricks reserved capacity to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider NetApp Storage reserved instance to save over your on-demand costs
+
+We analyzed your NetApp Storage usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With a reserved instance, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit that automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - NetAppStorageReservedCapacity (Consider NetApp Storage reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure Managed Disk reserved instance to save over your on-demand costs
+
+We analyzed your Azure Managed Disk usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With a reserved instance, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit that automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureManagedDiskReservedCapacity (Consider Azure Managed Disk reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Red Hat reserved instance to save over your on-demand costs
+
+We analyzed your Red Hat usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With a reserved instance, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit that automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - RedHatReservedCapacity (Consider Red Hat reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider RedHat Osa reserved instance to save over your on-demand costs
+
+We analyzed your RedHat Osa usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With a reserved instance, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit that automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - RedHatOsaReservedCapacity (Consider RedHat Osa reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider SapHana reserved instance to save over your on-demand costs
+
+We analyzed your SapHana usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With a reserved instance, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit that automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SapHanaReservedCapacity (Consider SapHana reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider SuseLinux reserved instance to save over your on-demand costs
+
+We analyzed your SuseLinux usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With a reserved instance, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit that automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SuseLinuxReservedCapacity (Consider SuseLinux reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider VMware Cloud Simple reserved instance
+
+We analyzed your VMware Cloud Simple usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With a reserved instance, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit that automatically applies to new or existing deployments. Savings estimates are calculated for individual subscriptions from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - VMwareCloudSimpleReservedCapacity (Consider VMware Cloud Simple reserved instance)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance
+
+With Ephemeral OS Disk, customers get these benefits: lower storage cost for the OS disk, lower read/write latency to the OS disk, and faster VM reimage operations that reset the OS (and temporary disk) to its original state. Ephemeral OS Disk is preferable for short-lived IaaS VMs or VMs with stateless workloads.
+
+Learn more about [Subscription - EphemeralOsDisk (Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance)](/azure/virtual-machines/windows/ephemeral-os-disks).
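+
+If you want to try this from the command line, here is a minimal Azure CLI sketch; the resource names are placeholders, and the VM size must have enough cache or temp storage to hold the OS image.
+
```azurecli
# Create a VM whose OS disk lives on the VM's local cache (ephemeral).
# All names below are placeholders for illustration only.
az vm create \
  --resource-group myResourceGroup \
  --name myEphemeralVm \
  --image UbuntuLTS \
  --size Standard_DS3_v2 \
  --ephemeral-os-disk true \
  --os-disk-caching ReadOnly \
  --admin-username azureuser \
  --generate-ssh-keys
```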
+
+## Synapse
+
+### Consider enabling autopause feature on Spark compute.
+
+Auto-pause releases and shuts down unused compute resources after a set period of inactivity.
+
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoPauseGuidance).
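+
+As a hedged example, assuming the `az synapse spark pool` commands available in recent CLI versions, auto-pause can be turned on for an existing pool like this (names are placeholders):
+
```azurecli
# Enable auto-pause so the pool shuts down after 15 idle minutes.
az synapse spark pool update \
  --name mysparkpool \
  --workspace-name myworkspace \
  --resource-group myResourceGroup \
  --enable-auto-pause true \
  --delay 15
```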
+
+### Consider enabling autoscale feature on Spark compute.
+
+Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Apache Spark for Azure Synapse Analytics pool, a minimum and maximum number of nodes can be set when Autoscale is selected. Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no additional charge for this feature.
+
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance).
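+
+A similar sketch, again with placeholder names and assuming the same CLI commands, enables autoscale with a node range:
+
```azurecli
# Let the pool scale between 3 and 10 nodes based on load.
az synapse spark pool update \
  --name mysparkpool \
  --workspace-name myworkspace \
  --resource-group myResourceGroup \
  --enable-auto-scale true \
  --min-node-count 3 \
  --max-node-count 10
```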
++
+## Next steps
+
+Learn more about [Cost Optimization - Microsoft Azure Well Architected Framework](/azure/architecture/framework/cost/overview)
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
+
+ Title: Operational excellence recommendations
+description: Operational excellence recommendations
+ Last updated : 02/02/2022
+# Operational excellence recommendations
+
+Operational excellence recommendations in Azure Advisor can help you with:
+- Process and workflow efficiency.
+- Resource manageability.
+- Deployment best practices.
+
+You can get these recommendations on the **Operational Excellence** tab of the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Operational Excellence** tab.
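+
+If you prefer the command line, the same recommendations can be listed with the Azure CLI; the category value below is assumed from the CLI's documented set, and the output columns may vary by CLI version.
+
```azurecli
# List operational excellence recommendations for the current subscription.
az advisor recommendation list \
  --category OperationalExcellence \
  --output table
```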
++
+## Spring Cloud
+
+### Update your outdated Azure Spring Cloud SDK to the latest version
+
+We have identified API calls from an outdated Azure Spring Cloud SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Cloud SDK to the latest version)](/azure/spring-cloud).
+
+### Update Azure Spring Cloud API Version
+
+We have identified API calls from an outdated Azure Spring Cloud API for resources under this subscription. We recommend switching to the latest Spring Cloud API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
+
+Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Cloud API Version)](/azure/spring-cloud).
+
+## Automation
+
+### Upgrade to Start/Stop VMs v2
+
+This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+
+Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs).
+
+## Batch
+
+### Recreate your pool to get the latest node agent features and fixes
+
+Your pool has an old node agent. Consider recreating your pool to get the latest node agent updates and bug fixes.
+
+Learn more about [Batch account - OldPool (Recreate your pool to get the latest node agent features and fixes)](https://aka.ms/batch_oldpool_learnmore).
+
+### Delete and recreate your pool to remove a deprecated internal component
+
+Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance.
+
+Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](https://aka.ms/batch_deprecatedcomponent_learnmore).
+
+### Upgrade to the latest API version to ensure your Batch account remains operational.
+
+In the past 14 days, you have invoked a Batch management or service API version that is scheduled for deprecation. Upgrade to the latest API version to ensure your Batch account remains operational.
+
+Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](https://aka.ms/batch_deprecatedapi_learnmore).
+
+### Delete and recreate your pool using a VM size that will soon be retired
+
+Your pool is using A8-A11 VMs, which are set to be retired in March 2021. Please delete your pool and recreate it with a different VM size.
+
+Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your pool using a VM size that will soon be retired)](https://aka.ms/batch_a8_a11_retirement_learnmore).
+
+### Recreate your pool with a new image
+
+Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
+
+Learn more about [Batch account - EolImage (Recreate your pool with a new image)](https://aka.ms/batch_expiring_image_learn_more).
+
+## Cognitive Service
+
+### Upgrade to the latest version of the Immersive Reader SDK
+
+We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
+
+Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](https://aka.ms/ImmersiveReaderAzureAdvisorSDKLearnMore).
+
+## Compute
+
+### Increase the number of compute resources you can deploy by 10 vCPU
+
+If quota limits are exceeded, new VM deployments will be blocked until quota is increased. Increase your quota now to enable deployment of more resources.
+
+Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits).
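+
+Before requesting an increase, you can check how close you are to the regional vCPU limits with a quick Azure CLI query (the region is a placeholder):
+
```azurecli
# Show current usage against compute quotas in a region.
az vm list-usage --location eastus --output table
```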
+
+### Add Azure Monitor to your virtual machine (VM) labeled as production
+
+Azure Monitor for VMs monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
+
+Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview).
+
+### Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.
+
+Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers. This can be viewed as malicious traffic and blocked by the DDoS service in the Azure environment.
+
+Learn more about [Virtual machine - GetVmlistFortigateNtpIssue (Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.)](https://docs.fortinet.com/document/fortigate/6.2.3/fortios-release-notes/236526/known-issues).
+
+### An Azure environment update has been rolled out that may affect your Checkpoint Firewall.
+
+The image version of the Checkpoint firewall installed may have been affected by the recent Azure environment update. A kernel panic resulting in a reboot to factory defaults can occur in certain circumstances.
+
+Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that may affect your Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal).
+
+### The iControl REST interface has an unauthenticated remote command execution vulnerability.
+
+This vulnerability allows unauthenticated attackers with network access to the iControl REST interface, through the BIG-IP management interface and self IP addresses, to execute arbitrary system commands, create or delete files, and disable services. This vulnerability can only be exploited through the control plane and cannot be exploited through the data plane. Exploitation can lead to complete system compromise. The BIG-IP system in Appliance mode is also vulnerable.
+
+Learn more about [Virtual machine - GetF5vulnK03009991 (The iControl REST interface has an unauthenticated remote command execution vulnerability.)](https://support.f5.com/csp/article/K03009991).
+
+### NVA Accelerated Networking enabled but potentially not working.
+
+Desired state for Accelerated Networking is set to 'true' for one or more interfaces on this VM, but the actual state for Accelerated Networking is not enabled.
+
+Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](/azure/virtual-network/create-vm-accelerated-networking-cli).
+
+### Upgrade Citrix load balancers to avoid connectivity issues during NIC maintenance operations.
+
+We have identified that your Virtual Machine might be running a software image whose drivers for Accelerated Networking (AN) are not compatible with the Azure environment. It has a synthetic network interface that is AN capable but may disconnect during a maintenance or NIC operation. It is recommended that you upgrade to the latest version of the image that addresses this issue. Please contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - GetCitrixVFRevokeError (Upgrade Citrix load balancers to avoid connectivity issues during NIC maintenance operations.)](https://www.citrix.com/support/).
+
+## Kubernetes
+
+### Update cluster's service principal
+
+This cluster's service principal has expired, and the cluster will not be healthy until the service principal is updated.
+
+Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's service principal)](/azure/aks/update-credentials).
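+
+A minimal sketch of the documented reset flow, with placeholder names and a new client secret you generate yourself, looks like this:
+
```azurecli
# Update the cluster with a new secret for its existing service principal.
az aks update-credentials \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --reset-service-principal \
  --service-principal <service-principal-app-id> \
  --client-secret <new-client-secret>
```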
+
+### Monitoring addon workspace is deleted
+
+The monitoring addon workspace is deleted. Correct the issues to set up the monitoring addon.
+
+Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](https://aka.ms/aks-disable-monitoring-addon).
+
+### Deprecated Kubernetes API in 1.16 is found
+
+Deprecated Kubernetes API in 1.16 is found. Avoid using deprecated APIs.
+
+Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn116IsFound (Deprecated Kubernetes API in 1.16 is found)](https://aka.ms/aks-deprecated-k8s-api-1.16).
+
+### Enable the Cluster Autoscaler
+
+This cluster has not enabled the AKS Cluster Autoscaler, and it will not adapt to changing load conditions unless you have other ways to autoscale your cluster.
+
+Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](/azure/aks/cluster-autoscaler).
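+
+For illustration, assuming a default node pool and placeholder names, the cluster autoscaler can be enabled on an existing cluster like this:
+
```azurecli
# Enable the cluster autoscaler with a node range of 1 to 5.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```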
+
+### The AKS node pool subnet is full
+
+Some of the subnets for this cluster's node pools are full and cannot take any more worker nodes. Using the Azure CNI plugin requires reserving IP addresses for each node, and for all the pods on the node, at node provisioning time. If there is not enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster cannot be upgraded if the node subnet is full.
+
+Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](/azure/aks/use-multiple-node-pools#add-a-node-pool-with-a-unique-subnet-preview).
+
+### Disable the Application Routing Addon
+
+This cluster has Pod Security Policies enabled, which are going to be deprecated in favor of Azure Policy for AKS.
+
+Learn more about [Kubernetes service - UseAzurePolicyForKubernetes (Disable the Application Routing Addon)](/azure/aks/use-pod-security-on-azure-policy).
+
+### Use Ephemeral OS disk
+
+This cluster is not using ephemeral OS disks, which can provide lower read/write latency, along with faster node scaling and cluster upgrades.
+
+Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](/azure/aks/cluster-configuration#ephemeral-os).
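+
+As a sketch (placeholder names; the VM size must have enough cache or temp storage for the OS disk), a new node pool can opt into ephemeral OS disks like this:
+
```azurecli
# Add a node pool that uses ephemeral OS disks.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ephpool \
  --node-osdisk-type Ephemeral \
  --node-vm-size Standard_DS3_v2
```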
+
+### Use Uptime SLA
+
+This cluster has not enabled Uptime SLA, and it is limited to an SLO of 99.5%.
+
+Learn more about [Kubernetes service - UseUptimeSLA (Use Uptime SLA)](/azure/aks/uptime-sla).
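+
+With the CLI versions current at the time of writing, Uptime SLA could be enabled on an existing cluster roughly as follows (placeholder names; newer CLI releases may expose this as a tier setting instead):
+
```azurecli
# Turn on the paid Uptime SLA for an existing cluster.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --uptime-sla
```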
+
+### Deprecated Kubernetes API in 1.22 has been found
+
+Deprecated Kubernetes API in 1.22 has been found. Avoid using deprecated APIs.
+
+Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn122IsFound (Deprecated Kubernetes API in 1.22 has been found)](https://aka.ms/aks-deprecated-k8s-api-1.22).
+
+## Desktop Virtualization
+
+### Permissions missing for start VM on connect
+
+We have determined that you have enabled start VM on connect but haven't given Azure Virtual Desktop the rights to power manage the VMs in your subscription. As a result, your users connecting to host pools won't receive a remote desktop session. Review the feature documentation for requirements.
+
+Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start VM on connect)](https://aka.ms/AVDStartVMRequirement).
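+
+One way to grant the required permission is a role assignment at subscription scope; the sketch below assumes the built-in "Desktop Virtualization Power On Contributor" role and uses placeholders for the Azure Virtual Desktop service principal and subscription.
+
```azurecli
# Allow the Azure Virtual Desktop service principal to power on VMs.
az role assignment create \
  --assignee <azure-virtual-desktop-sp-object-id> \
  --role "Desktop Virtualization Power On Contributor" \
  --scope /subscriptions/<subscription-id>
```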
+
+### No validation environment enabled
+
+We have determined that you do not have a validation environment enabled in the current subscription. When creating your host pools, you selected "No" for "Validation environment" on the properties tab. Having at least one host pool with a validation environment enabled ensures business continuity through Windows Virtual Desktop service deployments with early detection of potential issues.
+
+Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](/azure/virtual-desktop/create-validation-host-pool).
+
+### Not enough production environments enabled
+
+We have determined that too many of your host pools have Validation Environment enabled. In order for Validation Environments to best serve their purpose, you should have at least one, but never more than half of your host pools in Validation Environment. By having a healthy balance between your host pools with Validation Environment enabled and those with it disabled, you will best be able to utilize the benefits of the multistage deployments that Windows Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select "No" next to the "Validation Environment" setting.
+
+Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](/azure/virtual-desktop/create-host-pools-powershell).
+
+## Cosmos DB
+
+### Migrate Azure Cosmos DB attachments to Azure Blob Storage
+
+We noticed that your Azure Cosmos collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data.
+
+Learn more about [Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](/azure/cosmos-db/attachments#migrating-attachments-to-azure-blob-storage).
+
+### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup
+
+Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup may also be more cost-effective as a single copy of your data is retained.
+
+Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](/azure/cosmos-db/continuous-backup-restore-introduction).
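+
+For an account that meets the feature's prerequisites, the migration can be started from the Azure CLI; the names below are placeholders.
+
```azurecli
# Switch an existing account from periodic to continuous backup.
az cosmosdb update \
  --resource-group myResourceGroup \
  --name mycosmosaccount \
  --backup-policy-type Continuous
```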
+
+## Insights
+
+### Repair your log alert rule
+
+We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
+
+Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](https://aka.ms/aa_logalerts_queryrepair).
+
+### Log alert rule was disabled
+
+The alert rule was disabled by Azure Monitor as it was causing service issues. To enable the alert rule, contact support.
+
+Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](https://aka.ms/aa_logalerts_queryrepair).
+
+## Key Vault
+
+### Create a backup of HSM
+
+Create a periodic HSM backup to prevent data loss and have the ability to recover the HSM in case of a disaster.
+
+Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](/azure/key-vault/managed-hsm/best-practices#backup).
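+
+A hedged example of a full HSM backup to a storage container, using placeholder names and a SAS token that you would generate separately:
+
```azurecli
# Start a full backup of the managed HSM into a blob container.
az keyvault backup start \
  --hsm-name myManagedHsm \
  --storage-account-name mystorageaccount \
  --blob-container-name hsmbackups \
  --storage-container-SAS-token "<sas-token>"
```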
+
+## Data Explorer
+
+### Reduce the cache policy on your Data Explorer tables
+
+Reduce the table cache policy to match the usage patterns (query lookback period).
+
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy).
+
+## Networking
+
+### Resolve Azure Key Vault issue for your Application Gateway
+
+We've detected that one or more of your Application Gateways has been misconfigured to obtain their listener certificate(s) from Key Vault, which may result in operational issues. You should fix this misconfiguration immediately to avoid operational issues for your Application Gateway.
+
+Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](https://aka.ms/agkverror).
+
+### Application Gateway does not have enough capacity to scale out
+
+We've detected that your Application Gateway subnet does not have enough capacity for allowing scale out during high traffic conditions, which can cause downtime.
+
+Learn more about [Application gateway - AppgwRestrictedSubnetSpace (Application Gateway does not have enough capacity to scale out)](https://aka.ms/application-gateway-faq).
+
+### Enable Traffic Analytics to view insights into traffic patterns across Azure resources
+
+Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in Azure. Traffic analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow. With traffic analytics, you can view top talkers across Azure and non-Azure deployments, investigate open ports, protocols, and malicious flows in your environment, and optimize your network deployment for performance. You can process flow logs at 10-minute and 60-minute processing intervals, giving you faster analytics on your traffic.
+
+Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](https://aka.ms/aa_enableta_learnmore).
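+
+As a sketch, assuming Network Watcher is enabled in the region and using placeholder names, an NSG flow log with Traffic Analytics at a 10-minute interval can be created like this:
+
```azurecli
# Create an NSG flow log and send it to Traffic Analytics.
az network watcher flow-log create \
  --resource-group myResourceGroup \
  --location eastus \
  --name myFlowLog \
  --nsg myNSG \
  --storage-account mystorageaccount \
  --enabled true \
  --traffic-analytics true \
  --workspace myLogAnalyticsWorkspace \
  --interval 10
```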
+
+## SQL Virtual Machine
+
+### SQL IaaS Agent should be installed in full mode
+
+Full mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. There is no cost associated with using the full manageability mode. System administrator permissions are required. Note that installing or upgrading to full mode is an online operation; no restart is required.
+
+Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should be installed in full mode)](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management?tabs=azure-powershell).
+
+## Storage
+
+### Prevent hitting subscription limit for maximum storage accounts
+
+A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Please evaluate the recommended action below to avoid hitting the limit.
+
+Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit).
+
+### Update to newer releases of the Storage Java v12 SDK for better reliability.
+
+We noticed that one or more of your applications use an older version of the Azure Storage Java v12 SDK to write data to Azure Storage. Unfortunately, the version of the SDK being used has a critical issue that uploads incorrect data during retries (for example, in case of HTTP 500 errors), resulting in an invalid object being written. The issue is fixed in newer releases of the Java v12 SDK.
+
+Learn more about [Storage Account - UpdateStorageJavaSDK (Update to newer releases of the Storage Java v12 SDK for better reliability.)](/azure/developer/java/sdk/?view=azure-java-stable&preserve-view=true).
+
+## Subscription
+
+### Set up staging environments in Azure App Service
+
+Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations.
+
+Learn more about [Subscription - AzureApplicationService (Set up staging environments in Azure App Service)](/azure/app-service/deploy-staging-slots).
+
+### Enforce 'Add or replace a tag on resources' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. It does not modify tags on resource groups.
+
+Learn more about [Subscription - AddTagPolicy (Enforce 'Add or replace a tag on resources' using Azure Policy)](/azure/governance/policy/overview).
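+
+A minimal sketch of assigning that built-in policy at subscription scope; the definition ID and the parameter names (`tagName`, `tagValue`) are assumptions you should confirm with `az policy definition list` before use.
+
```azurecli
# Assign the "Add or replace a tag on resources" built-in policy.
az policy assignment create \
  --name enforce-environment-tag \
  --scope /subscriptions/<subscription-id> \
  --policy <built-in-policy-definition-id> \
  --params '{"tagName": {"value": "Environment"}, "tagValue": {"value": "Production"}}'
```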
+
+### Enforce 'Allowed locations' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to restrict the locations your organization can specify when deploying resources. Use it to enforce your geo-compliance requirements.
+
+Learn more about [Subscription - AllowedLocationsPolicy (Enforce 'Allowed locations' using Azure Policy)](/azure/governance/policy/overview).
+
+### Enforce 'Audit VMs that do not use managed disks' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy audits VMs that do not use managed disks.
+
+Learn more about [Subscription - AuditForManagedDisksPolicy (Enforce 'Audit VMs that do not use managed disks' using Azure Policy)](/azure/governance/policy/overview).
+
+### Enforce 'Allowed virtual machine SKUs' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to specify a set of virtual machine SKUs that your organization can deploy.
+
+Learn more about [Subscription - AllowedVirtualMachineSkuPolicy (Enforce 'Allowed virtual machine SKUs' using Azure Policy)](/azure/governance/policy/overview).
+
+### Enforce 'Inherit a tag from the resource group' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task.
+
+Learn more about [Subscription - InheritTagPolicy (Enforce 'Inherit a tag from the resource group' using Azure Policy)](/azure/governance/policy/overview).
+
+### Use Azure Lighthouse to simply and securely manage customer subscriptions at scale
+
+Using Azure Lighthouse improves security and reduces unnecessary access to your customer tenants by enabling more granular permissions for your users. It also allows for greater scalability, as your users can work across multiple customer subscriptions using a single login in your tenant.
+
+Learn more about [Subscription - OnboardCSPSubscriptionsToLighthouse (Use Azure Lighthouse to simply and securely manage customer subscriptions at scale)](/azure/lighthouse/concepts/cloud-solution-provider).
+
+## Web
+
+### Set up staging environments in Azure App Service
+
+Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations.
+
+Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](/azure/app-service/deploy-staging-slots).
++
+## Next steps
+
+Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
+
+ Title: Performance recommendations
+description: Full list of available performance recommendations in Advisor.
+ Last updated : 02/03/2022
+# Performance recommendations
+
+The performance recommendations in Azure Advisor can help improve the speed and responsiveness of your business-critical applications. You can get performance recommendations from Advisor on the **Performance** tab of the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Performance** tab.
++
+## Attestation
+
+### Update Attestation API Version
+
+We have identified API calls from an outdated Attestation API for resources under this subscription. We recommend switching to the latest Attestation API version. You need to update your existing code to use the latest API version. This ensures you receive the latest features and performance improvements.
+
+Learn more about [Attestation provider - UpgradeAttestationAPI (Update Attestation API Version)](/rest/api/attestation).
+
+## Azure VMware Solution
+
+### vSAN capacity utilization has crossed critical threshold
+
+Your vSAN capacity utilization has reached 75%. The cluster utilization is required to remain below the 75% critical threshold for SLA compliance. Add new nodes to the vSphere cluster to increase capacity, delete VMs to reduce consumption, or adjust VM workloads.
+
+Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](/azure/azure-vmware/concepts-private-clouds-clusters).
+
+## Azure Cache for Redis
+
+### Improve your Cache and application performance when running with high network bandwidth
+
+Cache instances perform best when not running under high network bandwidth, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
+
+### Improve your Cache and application performance when running with many connected clients
+
+Cache instances perform best when not running under high server load, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
+
+### Improve your Cache and application performance when running with high server load
+
+Cache instances perform best when not running under high server load, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
+
+### Improve your Cache and application performance when running with high memory pressure
+
+Cache instances perform best when not running under high memory pressure, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
+
+## Cognitive Service
+
+### Upgrade to the latest Cognitive Service Text Analytics API version
+
+Upgrade to the latest API version to get the best results in terms of model quality, performance, and service availability. There are also new features available as separate endpoints starting from V3.0, such as PII recognition, entity recognition, and entity linking. Changes in the preview endpoints include opinion mining in the Sentiment Analysis endpoint and a redacted text property in the PII endpoint.
+
+Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest Cognitive Service Text Analytics API version)](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api).
+
+### Upgrade to the latest API version of Azure Cognitive Service for Language
+
+Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability.
+
+Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](https://aka.ms/language-api).
+
+### Upgrade to the latest Cognitive Service Text Analytics SDK version
+
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance, and service availability. There are also new features available as separate endpoints starting from V3.0, such as PII recognition, entity recognition, and entity linking. Changes in the preview endpoints include Opinion Mining in the Sentiment Analysis endpoint and a redacted text property in the PII endpoint.
+
+Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest Cognitive Service Text Analytics SDK version)](/azure/cognitive-services/text-analytics/quickstarts/text-analytics-sdk?tabs=version-3-1&pivots=programming-language-csharp).
+
+### Upgrade to the latest Cognitive Service Language SDK version
+
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability.
+
+Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](https://aka.ms/language-api).
+
+## Communication services
+
+### Use recommended version of Chat SDK
+
+Azure Communication Services Chat SDK can be used to add rich, real-time chat to your applications. Update to the recommended version of Chat SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](/azure/communication-services/concepts/chat/sdk-features).
+
+### Use recommended version of Resource Manager SDK
+
+Resource Manager SDK can be used to provision and manage Azure Communication Services resources. Update to the recommended version of Resource Manager SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-net).
+
+### Use recommended version of Identity SDK
+
+Azure Communication Services Identity SDK can be used to manage identities, users, and access tokens. Update to the recommended version of Identity SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](/azure/communication-services/concepts/sdk-options).
+
+### Use recommended version of SMS SDK
+
+Azure Communication Services SMS SDK can be used to send and receive SMS messages. Update to the recommended version of SMS SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeSmsSdk (Use recommended version of SMS SDK)](/azure/communication-services/concepts/telephony-sms/sdk-features).
+
+### Use recommended version of Phone Numbers SDK
+
+Azure Communication Services Phone Numbers SDK can be used to acquire and manage phone numbers. Update to the recommended version of Phone Numbers SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](/azure/communication-services/concepts/sdk-options).
+
+### Use recommended version of Calling SDK
+
+Azure Communication Services Calling SDK can be used to enable voice, video, screen-sharing, and other real-time communication. Update to the recommended version of Calling SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+
+### Use recommended version of Call Automation SDK
+
+Azure Communication Services Call Automation SDK can be used to make and manage calls, play audio, and configure recording. Update to the recommended version of Call Automation SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](/azure/communication-services/concepts/voice-video-calling/call-automation-apis).
+
+### Use recommended version of Network Traversal SDK
+
+Azure Communication Services Network Traversal SDK can be used to access TURN servers for low-level data transport. Update to the recommended version of Network Traversal SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](/azure/communication-services/concepts/sdk-options).
+
+## Compute
+
+### Improve user experience and connectivity by deploying VMs closer to user's location.
+
+We have determined that your VMs are located in a region different from or far from where your users are connecting, using Windows Virtual Desktop (WVD). This may lead to prolonged connection response times and will impact overall user experience on WVD.
+
+Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](/azure/virtual-desktop/connection-latency).
+
+### Consider increasing the size of your NVA to address persistent high CPU
+
+When NVAs run at high CPU, packets can get dropped resulting in connection failures or high latency due to network retransmits. Your NVA is running at high CPU, so you should consider increasing the VM size as allowed by the NVA vendor's licensing requirements.
+
+Learn more about [Virtual machine - NVAHighCPU (Consider increasing the size of your NVA to address persistent high CPU)](https://aka.ms/NVAHighCPU).
+
+### Use Managed disks to prevent disk I/O throttling
+
+Your virtual machine disks belong to a storage account that has reached its scalability target, and is susceptible to I/O throttling. To protect your virtual machine from performance degradation and to simplify storage management, use Managed Disks.
+
+Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disks to prevent disk I/O throttling)](https://aka.ms/aa_avset_manageddisk_learnmore).
+
+### Convert Managed Disks from Standard HDD to Premium SSD for performance
+
+We have noticed your Standard HDD disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which will take three to five minutes.
+
+Learn more about [Disk - MDHDDtoPremiumForPerformance (Convert Managed Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
+
+### Enable Accelerated Networking to improve network performance and latency
+
+We have detected that Accelerated Networking is not enabled on VM resources in your existing deployment that may be capable of supporting this feature. If your VM OS image supports Accelerated Networking as detailed in the documentation, make sure to enable this free feature on these VMs to maximize performance and reduce latency for your networking workloads in the cloud.
+
+Learn more about [Virtual machine - AccelNetConfiguration (Enable Accelerated Networking to improve network performance and latency)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
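+
+For an existing VM with a supported size and OS image, the typical enablement flow from the Azure CLI is sketched below (placeholder names; the VM must be deallocated, not just stopped):
+
```azurecli
# Deallocate the VM, enable Accelerated Networking on its NIC, then start it again.
az vm deallocate --resource-group myResourceGroup --name myVM
az network nic update \
  --resource-group myResourceGroup \
  --name myVMNic \
  --accelerated-networking true
az vm start --resource-group myResourceGroup --name myVM
```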
+
+### Use SSD Disks for your production workloads
+
+We noticed that you are using SSD disks while also using Standard HDD disks on the same VM. Standard HDD managed disks are generally recommended for dev-test and backup; we recommend you use Premium SSDs or Standard SSDs for production. Premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Standard SSDs provide consistent and lower latency. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which will take three to five minutes.
+
+Learn more about [Virtual machine - MixedDiskTypeToSSDPublic (Use SSD Disks for your production workloads)](/azure/virtual-machines/windows/disks-types#disk-comparison).
+
+### Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of Barracuda Networks NextGen Firewall Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Barracuda Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - BarracudaNVAAccelNet (Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of Arista Networks vEOS Router Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Arista Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - AristaNVAAccelNet (Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of Cisco Cloud Services Router 1000V Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Cisco for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - CiscoCSRNVAAccelNet (Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of Palo Alto Networks VM-Series Firewall Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Palo Alto Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - PaloAltoNVAAccelNet (Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of NetApp Cloud Volumes ONTAP Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact NetApp for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - NetAppNVAAccelNet (NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Match production Virtual Machines with Production Disk for consistent performance and better latency
+
+Production virtual machines need production disks if you want to get the best performance. We see that you are running a production-level virtual machine; however, you are using a low-performing Standard HDD disk. Upgrading the disks attached to your production virtual machines to either Standard SSD or Premium SSD will give you a more consistent experience and improvements in latency.
+
+Learn more about [Virtual machine - MatchProdVMProdDisks (Match production Virtual Machines with Production Disk for consistent performance and better latency)](/azure/virtual-machines/windows/disks-types#disk-comparison).
+
+### Update to the latest version of your Arista VEOS product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface that is either not AN capable or not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - AristaVeosANUpgradeRecommendation (Update to the latest version of your Arista VEOS product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface that is either not AN capable or not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - BarracudaNgANUpgradeRecommendation (Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface that is either not AN capable or not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - Cisco1000vANUpgradeRecommendation (Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your F5 BigIp product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface that is either not AN capable or not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - F5BigIpANUpgradeRecommendation (Update to the latest version of your F5 BigIp product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your NetApp product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface that is either not AN capable or not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - NetAppANUpgradeRecommendation (Update to the latest version of your NetApp product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a version of software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface that is either not AN capable or not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - PaloAltoFWANUpgradeRecommendation (Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Check Point product for Accelerated Networking support.
+
+We have identified that your Virtual Machine (VM) might be running a version of a software image with older drivers for Accelerated Networking (AN). Your VM has a synthetic network interface that is either not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - CheckPointCGANUpgradeRecommendation (Update to the latest version of your Check Point product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
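+
+After the appliance image has been upgraded to an AN-capable version, Accelerated Networking can be enabled on the existing NIC. A minimal sketch with the Azure CLI follows; the resource group, VM, and NIC names are placeholders, and the VM must be deallocated before the NIC is updated.
+
+```azurecli
+# Deallocate the NVA VM before changing the NIC (hypothetical names).
+az vm deallocate --resource-group myResourceGroup --name myNvaVm
+
+# Enable Accelerated Networking on the existing NIC.
+az network nic update --resource-group myResourceGroup --name myNvaNic --accelerated-networking true
+
+# Start the VM again.
+az vm start --resource-group myResourceGroup --name myNvaVm
+```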
+
+### Accelerated Networking may require stopping and starting the VM
+
+We have detected that Accelerated Networking is not engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it may be necessary to stop and start your VM, at your convenience, to re-engage Accelerated Networking.
+
+Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking may require stopping and starting the VM)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
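+
+A minimal sketch of the stop/start cycle, assuming a VM named `myVm` and a NIC named `myVmNic` in `myResourceGroup` (all placeholders); deallocating rather than restarting ensures the VM is reallocated onto hardware where Accelerated Networking can engage.
+
+```azurecli
+# Deallocate and start the VM to re-engage Accelerated Networking (hypothetical names).
+az vm deallocate --resource-group myResourceGroup --name myVm
+az vm start --resource-group myResourceGroup --name myVm
+
+# Verify that the NIC still reports Accelerated Networking as enabled.
+az network nic show --resource-group myResourceGroup --name myVmNic --query enableAcceleratedNetworking
+```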
+
+### NVA may see traffic loss due to hitting the maximum number of flows.
+
+Packet loss has been observed for this Virtual Machine because it is hitting or exceeding the maximum number of flows for a VM instance of this size on Azure.
+
+Learn more about [Virtual machine - NvaMaxFlowLimit (NVA may see traffic loss due to hitting the maximum number of flows.)](/azure/virtual-network/virtual-machine-network-throughput).
+
+### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.
+
+Ultra disk is available in the same region as your database workload. Ultra disk offers high throughput, high IOPS, and consistently low-latency disk storage for your database workloads. For Oracle DBs, you can now use either 4k or 512E sector sizes with Ultra disk, depending on your Oracle DB version. For SQL Server, using Ultra disk for your log disk might offer more performance for your database. See the linked instructions for migrating your log disk to Ultra disk.
+
+Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal).
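+
+A hedged sketch of attaching an Ultra disk for a log volume with the Azure CLI, assuming the VM size and availability zone support Ultra disks; all names, sizes, and IOPS/throughput values below are placeholders.
+
+```azurecli
+# Allow the VM to use Ultra disks (VM must be deallocated first; hypothetical names).
+az vm deallocate --resource-group myResourceGroup --name myDbVm
+az vm update --resource-group myResourceGroup --name myDbVm --ultra-ssd-enabled true
+
+# Create an Ultra disk sized for the log workload (placeholder performance values).
+az disk create --resource-group myResourceGroup --name myLogUltraDisk \
+  --size-gb 512 --sku UltraSSD_LRS --zone 1 \
+  --disk-iops-read-write 8000 --disk-mbps-read-write 200
+
+# Attach the disk and start the VM again.
+az vm disk attach --resource-group myResourceGroup --vm-name myDbVm --name myLogUltraDisk
+az vm start --resource-group myResourceGroup --name myDbVm
+```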
+
+## Kubernetes
+
+### Unsupported Kubernetes version is detected
+
+An unsupported Kubernetes version was detected. Ensure that your Kubernetes cluster runs a supported version.
+
+Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
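+
+A minimal sketch for moving to a supported version with the Azure CLI, assuming a cluster named `myAksCluster` in `myResourceGroup` (both placeholders); check the available upgrades first and substitute a currently supported version.
+
+```azurecli
+# List the versions the cluster can upgrade to (hypothetical names).
+az aks get-upgrades --resource-group myResourceGroup --name myAksCluster --output table
+
+# Upgrade the control plane and node pools to a supported version (placeholder version).
+az aks upgrade --resource-group myResourceGroup --name myAksCluster --kubernetes-version 1.23.5
+```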
+
+## Data Factory
+
+### Review your throttled Data Factory Triggers
+
+A high volume of throttling has been detected in an event-based trigger that runs in your Data Factory resource. This is causing your pipeline runs to drop from the run queue. Review the trigger definition to resolve issues and increase performance.
+
+Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](https://aka.ms/adf-create-event-trigger).
+
+## MariaDB
+
+### Scale the storage limit for MariaDB server
+
+Our internal telemetry shows that the server may be constrained because it is approaching the limits of the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning on the "Auto-Growth" feature for automatic storage increases.
+
+Learn more about [MariaDB server - OrcasMariaDbStorageLimit (Scale the storage limit for MariaDB server)](https://aka.ms/mariadbstoragelimits).
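+
+For example, provisioned storage and auto-grow can be adjusted with the Azure CLI; a minimal sketch follows, in which the server name, resource group, and storage size (in MB) are placeholders.
+
+```azurecli
+# Increase provisioned storage and turn on auto-grow (hypothetical names and size).
+az mariadb server update --resource-group myResourceGroup --name mymariadbserver \
+  --storage-size 204800 --auto-grow Enabled
+```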
+
+### Increase the MariaDB server vCores
+
+Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+
+Learn more about [MariaDB server - OrcasMariaDbCpuOverlaod (Increase the MariaDB server vCores)](https://aka.ms/mariadbpricing).
+
+### Scale the MariaDB server to higher SKU
+
+Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connection requests, which adversely affects performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory Optimized SKUs.
+
+Learn more about [MariaDB server - OrcasMariaDbConcurrentConnection (Scale the MariaDB server to higher SKU)](https://aka.ms/mariadbconnectionlimits).
+
+### Move your MariaDB server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [MariaDB server - OrcasMariaDbMemoryCache (Move your MariaDB server to Memory Optimized SKU)](https://aka.ms/mariadbpricing).
+
+### Increase the reliability of audit logs
+
+Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
+
+Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](https://aka.ms/mariadb-audit-logs).
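+
+The server parameters named above can be set with the Azure CLI. A minimal sketch follows; the server name, resource group, and parameter values are placeholders and should be narrowed to the events and users you actually need to audit.
+
+```azurecli
+# Limit audit logging to the required events and users (hypothetical names and values).
+az mariadb server configuration set --resource-group myResourceGroup \
+  --server-name mymariadbserver --name audit_log_events --value CONNECTION
+az mariadb server configuration set --resource-group myResourceGroup \
+  --server-name mymariadbserver --name audit_log_exclude_users --value azure_superuser
+```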
+
+## MySQL
+
+### Scale the storage limit for MySQL server
+
+Our internal telemetry shows that the server may be constrained because it is approaching the limits of the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning on the "Auto-Growth" feature for automatic storage increases.
+
+Learn more about [MySQL server - OrcasMySQLStorageLimit (Scale the storage limit for MySQL server)](https://aka.ms/mysqlstoragelimits).
+
+### Scale the MySQL server to higher SKU
+
+Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connection requests, which adversely affects performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory Optimized SKUs.
+
+Learn more about [MySQL server - OrcasMySQLConcurrentConnection (Scale the MySQL server to higher SKU)](https://aka.ms/mysqlconnectionlimits).
+
+### Increase the MySQL server vCores
+
+Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+
+Learn more about [MySQL server - OrcasMySQLCpuOverload (Increase the MySQL server vCores)](https://aka.ms/mysqlpricing).
+
+### Move your MySQL server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [MySQL server - OrcasMySQLMemoryCache (Move your MySQL server to Memory Optimized SKU)](https://aka.ms/mysqlpricing).
+
+### Add a MySQL Read Replica server
+
+Our internal telemetry shows that you may have a read-intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend that you add a read replica and offload some of your read workloads to the replica.
+
+Learn more about [MySQL server - OrcasMySQLReadReplica (Add a MySQL Read Replica server)](https://aka.ms/mysqlreadreplica).
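+
+A minimal sketch for adding a read replica with the Azure CLI; the replica name, source server, and resource group are placeholders, and the application must be pointed at the replica for read traffic.
+
+```azurecli
+# Create a read replica of an existing Azure Database for MySQL server (hypothetical names).
+az mysql server replica create --resource-group myResourceGroup \
+  --name mymysqlserver-replica1 --source-server mymysqlserver
+```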
+
+### Improve MySQL connection management
+
+Our internal telemetry indicates that your application connecting to the MySQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. This can be done by configuring a server-side connection pooler, such as ProxySQL.
+
+Learn more about [MySQL server - OrcasMySQLConnectionPooling (Improve MySQL connection management)](https://aka.ms/azure_mysql_connection_pooling).
+
+### Increase the reliability of audit logs
+
+Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
+
+Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](https://aka.ms/mysql-audit-logs).
+
+### Improve performance by optimizing MySQL temporary-table sizing
+
+Our internal telemetry indicates that your MySQL server may be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This may result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions.
+
+Learn more about [MySQL server - OrcasMySqlTmpTables (Improve performance by optimizing MySQL temporary-table sizing)](https://aka.ms/azure_mysql_tmp_table).
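+
+These parameters can be raised with the Azure CLI. A minimal sketch follows; the server name, resource group, and byte values are placeholders and should be sized against the server's available memory.
+
+```azurecli
+# Raise the in-memory temporary-table limits (hypothetical names; placeholder values in bytes).
+az mysql server configuration set --resource-group myResourceGroup \
+  --server-name mymysqlserver --name tmp_table_size --value 67108864
+az mysql server configuration set --resource-group myResourceGroup \
+  --server-name mymysqlserver --name max_heap_table_size --value 67108864
+```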
+
+### Improve MySQL connection latency
+
+Our internal telemetry indicates that your application connecting to MySQL server may not be managing connections efficiently. This may result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver.
+
+Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](https://aka.ms/azure_mysql_connection_redirection).
+
+## PostgreSQL
+
+### Scale the storage limit for PostgreSQL server
+
+Our internal telemetry shows that the server may be constrained because it is approaching the limits of the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning on the "Auto-Growth" feature for automatic storage increases.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
+
+### Increase the work_mem to avoid excessive disk spilling from sort and hash
+
+Our internal telemetry shows that the work_mem configuration is too small for your PostgreSQL server, which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server, which helps reduce how often sorts and hashes spill to disk and thereby improves overall query performance.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
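+
+work_mem is a server parameter that can be set with the Azure CLI. A minimal sketch follows; the server name and resource group are placeholders, and the value (expressed in kilobytes by default) should be tuned against the server's memory and concurrency.
+
+```azurecli
+# Increase work_mem to reduce on-disk sorts and hashes (hypothetical names; placeholder value).
+az postgres server configuration set --resource-group myResourceGroup \
+  --server-name mypgserver --name work_mem --value 32768
+```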
+
+### Distribute data in server group to distribute workload among nodes
+
+It looks like the data has not been distributed in this server group and remains on the coordinator. For full Hyperscale (Citus) benefits, distribute the data onto the worker nodes in this server group.
+
+Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusDistributeData (Distribute data in server group to distribute workload among nodes)](https://go.microsoft.com/fwlink/?linkid=2135201).
+
+### Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly
+
+It looks like the data is not well balanced between the worker nodes in this Hyperscale (Citus) server group. To use each worker node of the Hyperscale (Citus) server group effectively, rebalance the data in this server group.
+
+Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusRebalanceData (Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly)](https://go.microsoft.com/fwlink/?linkid=2148869).
+
+### Scale the PostgreSQL server to higher SKU
+
+Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connection requests, which adversely affects performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory Optimized SKUs.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlConcurrentConnection (Scale the PostgreSQL server to higher SKU)](https://aka.ms/postgresqlconnectionlimits).
+
+### Move your PostgreSQL server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your PostgreSQL server to Memory Optimized SKU)](https://aka.ms/postgresqlpricing).
+
+### Add a PostgreSQL Read Replica server
+
+Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica).
+
+### Increase the PostgreSQL server vCores
+
+Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlCpuOverload (Increase the PostgreSQL server vCores)](https://aka.ms/postgresqlpricing).
+
+### Improve PostgreSQL connection management
+
+Our internal telemetry indicates that your PostgreSQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. This can be done by configuring a server-side connection pooler, such as PgBouncer.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlConnectionPooling (Improve PostgreSQL connection management)](https://aka.ms/azure_postgresql_connection_pooling).
+
+### Improve PostgreSQL log performance
+
+Our internal telemetry indicates that your PostgreSQL server has been configured to output VERBOSE error logs. This can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve PostgreSQL log performance)](https://aka.ms/azure_postgresql_log_settings).
+
+### Optimize query statistics collection on an Azure Database for PostgreSQL
+
+Our internal telemetry indicates that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).
+
+### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting
+
+Our internal telemetry indicates that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store).
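+
+When you're not actively troubleshooting, the statistics and logging parameters called out in the preceding recommendations can be reset with the Azure CLI. A minimal sketch follows; the server name, resource group, and values are placeholders.
+
+```azurecli
+# Turn off query statistics collection and verbose logging when not troubleshooting (hypothetical names).
+az postgres server configuration set --resource-group myResourceGroup \
+  --server-name mypgserver --name pg_qs.query_capture_mode --value NONE
+az postgres server configuration set --resource-group myResourceGroup \
+  --server-name mypgserver --name pg_stat_statements.track --value NONE
+az postgres server configuration set --resource-group myResourceGroup \
+  --server-name mypgserver --name log_error_verbosity --value DEFAULT
+```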
+
+### Increase the storage limit for PostgreSQL Flexible Server
+
+Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits).
+
+### Optimize logging settings by setting LoggingCollector to -1
+
+Optimize logging settings by setting LoggingCollector to -1
+
+### Optimize logging settings by setting LogDuration to OFF
+
+Optimize logging settings by setting LogDuration to OFF
+
+### Optimize logging settings by setting LogStatement to NONE
+
+Optimize logging settings by setting LogStatement to NONE
+
+### Optimize logging settings by setting ReplaceParameter to OFF
+
+Optimize logging settings by setting ReplaceParameter to OFF
+
+### Optimize logging settings by setting LoggingCollector to OFF
+
+Optimize logging settings by setting LoggingCollector to OFF
+
+### Increase the storage limit for Hyperscale (Citus) server group
+
+Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
+
+### Optimize log_statement settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_statement enabled. For better performance, set it to NONE.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+
+### Increase the work_mem to avoid excessive disk spilling from sort and hash
+
+Our internal telemetry shows that the work_mem configuration is too small for your PostgreSQL server, which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server, which helps reduce how often sorts and hashes spill to disk and thereby improves overall query performance.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
+
+### Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning
+
+Our internal telemetry suggests that you can improve storage performance by enabling Intelligent tuning.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](/azure/postgresql/flexible-server/concepts-intelligent-tuning).
+
+### Optimize log_duration settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_duration enabled. For better performance, set it to OFF.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogDuration (Optimize log_duration settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+
+### Optimize log_min_duration settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_min_duration enabled. For better performance, set it to -1.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+
+### Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have pg_qs.query_capture_mode enabled. For better performance, set it to NONE.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-query-store-best-practices).
+
+### Optimize PostgreSQL performance by enabling PGBouncer
+
+Our internal telemetry indicates that you can improve PostgreSQL performance by enabling PGBouncer.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](/azure/postgresql/flexible-server/concepts-pgbouncer).
+
+### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_error_verbosity enabled. For better performance, set it to DEFAULT.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
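+
+For Flexible Server, the logging and query-store parameters called out in the preceding recommendations are set with `az postgres flexible-server parameter set`. A minimal sketch showing two of them follows; the server name, resource group, and values are placeholders, and the remaining parameters follow the same pattern (confirm the exact parameter names on your server before applying).
+
+```azurecli
+# Reset chatty logging parameters on a Flexible Server instance (hypothetical names and values).
+az postgres flexible-server parameter set --resource-group myResourceGroup \
+  --server-name mypgflexserver --name log_statement --value none
+az postgres flexible-server parameter set --resource-group myResourceGroup \
+  --server-name mypgflexserver --name log_error_verbosity --value default
+```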
+
+### Increase the storage limit for Hyperscale (Citus) server group
+
+Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+
+Learn more about [Hyperscale (Citus) server group - MarlinStorageLimitRecommendation (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
+
+### Migrate your database from SSPG to FSPG
+
+Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone-resilient HA, predictable performance, maximum control, a custom maintenance window, cost optimization controls, and a simplified developer experience.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](https://aka.ms/sspg-upgrade).
+
+## Desktop Virtualization
+
+### Improve user experience and connectivity by deploying VMs closer to user's location.
+
+We have determined that your VMs are located in a region that is different from, or far from, the region your users are connecting from when using Windows Virtual Desktop (WVD). This may lead to prolonged connection response times and will impact the overall user experience on WVD. When creating VMs for your host pools, you should attempt to use a region closer to the user. Close proximity ensures continuing satisfaction with the WVD service and a better overall quality of experience.
+
+Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](/azure/virtual-desktop/connection-latency).
+
+### Change the max session limit for your depth first load balanced host pool to improve VM performance
+
+Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions will be directed to the same session host and this may cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, you should also set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs. To fix this, open your host pool's properties and change the value next to the "Max session limit" setting.
+
+Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance )](/azure/virtual-desktop/configure-host-pool-load-balancing).
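+
+If the `desktopvirtualization` Azure CLI extension is installed, the limit can also be set from the command line. A minimal sketch under that assumption follows; the host pool name, resource group, and limit are placeholders.
+
+```azurecli
+# Set an appropriate max session limit for a depth-first host pool (hypothetical names and value).
+az desktopvirtualization hostpool update --resource-group myResourceGroup \
+  --name myHostPool --max-session-limit 8
+```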
+
+## Cosmos DB
+
+### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK
+
+We noticed that your Azure Cosmos DB applications are using Gateway mode via the Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
+
+Learn more about [Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
+
+### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1
+
+You are using a query page size of 100 for queries in your Azure Cosmos DB container. We recommend using a page size of -1 for faster scans.
+
+Learn more about [Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count).
+
+### Add composite indexes to your Azure Cosmos DB container
+
+Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It is recommended to add composite indexes to your containers' indexing policy to improve the RU consumption and decrease the latency of these queries.
+
+Learn more about [Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](/azure/cosmos-db/index-policy#composite-indexes).
+
+### Optimize your Azure Cosmos DB indexing policy to only index what's needed
+
+Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries.
+
+Learn more about [Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](/azure/cosmos-db/index-policy).
+
+### Use hierarchical partition keys for optimal data distribution
+
+This account has a custom setting that allows the logical partition size in a container to exceed the limit of 20 GB. This setting was applied by the Azure Cosmos DB team as a temporary measure to give you time to re-architect your application with a different partition key. It is not recommended as a long-term solution, as SLA guarantees are not honored when the limit is increased. You can now use hierarchical partition keys (preview) to re-architect your application. The feature allows you to exceed the 20 GB limit by setting up to three partition keys, ideal for multi-tenant scenarios or workloads that use synthetic keys.
+
+Learn more about [Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/).
+
+## HDInsight
+
+### Reads happen on most recent data
+
+More than 75% of your read requests are landing on the memstore. That indicates that the reads are primarily on recent data. This suggests that even if a flush happens on the memstore, the recent file needs to be accessed and that file needs to be in the cache.
+
+Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.
+
+You are seeing this advisor recommendation because HDInsight team's system log shows that in the past 7 days, your cluster has encountered the following scenarios:
+ 1. High WAL sync time latency
+ 2. High write request count (at least 3 one hour windows of over 1000 avg_write_requests/second/node)
+
+These conditions are indicators that your cluster is suffering from high write latencies. This could be due to a heavy workload on your cluster.
+To improve the performance of your cluster, you may want to consider utilizing the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, it provides low write latency and better resiliency for your applications.
+Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](/azure/hdinsight/hbase/apache-hbase-accelerated-writes).
+
+### More than 75% of your queries are full scan queries.
+
+More than 75% of the scan queries on your cluster are doing a full region/table scan. Modify your scan queries to avoid full region or table scans.
+
+Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Check your region counts as you have blocking updates.
+
+Region counts need to be adjusted to avoid updates getting blocked. Adjusting them might require scaling up the cluster by adding new nodes.
+
+Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Consider increasing the flusher threads
+
+The flush queue size in your region servers is more than 100 or there are updates getting blocked frequently. Tuning of the flush handler is recommended.
+
+Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Consider increasing your compaction threads for compactions to complete faster
+
+The compaction queue in your region servers is more than 2000, suggesting that more data requires compaction. Slower compactions can impact read performance because there are more files to read. More files without compaction can also impact heap usage related to how the files interact with the Azure file system.
+
+Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+## Key Vault
+
+### Update Key Vault SDK Version
+
+New Key Vault Client Libraries are split into keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes to issues reported by customers and proactively identified through our QA process.<br><br>**PLEASE DISMISS:**<br>Dismiss this recommendation if Key Vault is integrated with Azure Storage, Disk, or other Azure services that can use the old Key Vault SDK, and all your current custom applications use .NET SDK 4.0 or above.
+
+Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](/azure/key-vault/general/client-libraries).
+
+### Update Key Vault SDK Version
+
+New Key Vault Client Libraries are split into keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes to issues reported by customers and proactively identified through our QA process.
+
+> [!IMPORTANT]
+> Please be aware that you can only remediate this recommendation for custom applications you have access to. Recommendations can be shown due to integration with other Azure services, like Storage and Disk encryption, which are in the process of updating to the new version of our SDK. If you use .NET 4.0 in all your applications, please dismiss.
+
+Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](/azure/key-vault/general/client-libraries).
+
+## Data Explorer
+
+### Right-size Data Explorer resources for optimal performance.
+
+This recommendation surfaces all Data Explorer resources which exceed the recommended data capacity (80%). The recommended action to improve the performance is to scale to the recommended configuration shown.
+
+Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance).
+
+### Review table cache policies for Data Explorer tables
+
+This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy). (You'll see the top 10 tables by query percentage that access out-of-cache data). The recommended action to improve the performance: Limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value.
+
+Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy).
+
+### Reduce Data Explorer table cache policy for better performance
+
+Reducing the table cache policy will free up unused data from the resource's cache and improve performance.
+
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy).
+
+## Networking
+
+### Configure DNS Time to Live to 20 seconds
+
+Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a healthy endpoint as quickly as possible.
+
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).
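+
+A minimal sketch with the Azure CLI, assuming a profile named `myTmProfile` in `myResourceGroup` (both placeholders):
+
+```azurecli
+# Lower the DNS TTL on the Traffic Manager profile to 20 seconds (hypothetical names).
+az network traffic-manager profile update --resource-group myResourceGroup \
+  --name myTmProfile --ttl 20
+```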
+
+### Configure DNS Time to Live to 60 seconds
+
+Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a healthy endpoint as quickly as possible.
+
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
+
+### Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs
+
+You have been using over 90% of your procured circuit bandwidth recently. If you exceed your allocated bandwidth, you will experience an increase in dropped packets sent over ExpressRoute. Upgrade your circuit bandwidth to maintain performance if your bandwidth needs remain this high.
+
+Learn more about [ExpressRoute circuit - UpgradeERCircuitBandwidth (Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs)](/azure/expressroute/about-upgrade-circuit-bandwidth).
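+
+Circuit bandwidth can be increased in place with the Azure CLI (downgrades require recreating the circuit). A minimal sketch follows; the circuit name, resource group, and bandwidth value (in Mbps) are placeholders.
+
+```azurecli
+# Upgrade the ExpressRoute circuit bandwidth (hypothetical names; placeholder value in Mbps).
+az network express-route update --resource-group myResourceGroup \
+  --name myErCircuit --bandwidth 1000
+```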
+
+### Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use
+
+Under high traffic load, the VPN gateway may drop packets due to high CPU.
+
+Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).
+
+### Consider increasing the size of your VNet Gateway SKU to address high P2S use
+
+Each gateway SKU can only support a specified count of concurrent P2S connections. Your connection count is close to your gateway limit, so additional connection attempts may fail.
+
+Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consider increasing the size of your VNet Gateway SKU to address high P2S use)](https://aka.ms/HighP2SConnectionsVNetGateway).
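+
+Both of these gateway recommendations are addressed by resizing the gateway to a larger SKU. A minimal sketch follows, assuming a gateway named `myVnetGateway` in `myResourceGroup` (placeholders) and that the target SKU shown is valid for your gateway generation.
+
+```azurecli
+# Resize the virtual network gateway to a larger SKU (hypothetical names; placeholder SKU).
+az network vnet-gateway update --resource-group myResourceGroup \
+  --name myVnetGateway --sku VpnGw3
+```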
+
+### Make sure you have enough instances in your Application Gateway to support your traffic
+
+Your Application Gateway has been running at high utilization recently, and under heavy load you may experience traffic loss or increased latency. It is important that you scale your Application Gateway according to your traffic, with some buffer, so that you are prepared for traffic surges or spikes and minimize the impact they may have on your QoS. The Application Gateway v1 SKU (Standard/WAF) supports manual scaling, and the v2 SKU (Standard_v2/WAF_v2) supports manual scaling and autoscaling. With manual scaling, increase your instance count; if autoscaling is enabled, make sure your maximum instance count is set to a higher value so that Application Gateway can scale out as traffic increases.
+
+Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw).
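+
+A minimal sketch of both options with the Azure CLI; the gateway names and instance counts below are placeholders, and the `--max-capacity` flag applies only to v2 SKUs with autoscaling enabled.
+
+```azurecli
+# v1 SKU (Standard/WAF): increase the manual instance count (hypothetical names and values).
+az network application-gateway update --resource-group myResourceGroup \
+  --name myAppGateway --capacity 4
+
+# v2 SKU (Standard_v2/WAF_v2) with autoscaling: raise the maximum instance count.
+az network application-gateway update --resource-group myResourceGroup \
+  --name myAppGatewayV2 --max-capacity 10
+```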
+
+## SQL
+
+### Create statistics on table columns
+
+We have detected that you are missing table statistics which may be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
+
+Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics).
+
+### Remove data skew to increase query performance
+
+We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks.
+
+Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew).
+
+### Update statistics on table columns
+
+We have detected that you do not have up-to-date table statistics which may be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
+
+Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics).
+
+### Right-size overutilized SQL Databases
+
+We've analyzed the DTU consumption of your SQL Database over the past 14 days and identified SQL Databases with high usage. You can improve your database performance by right-sizing to the recommended SKU based on the 95th percentile of your everyday workload.
+
+Learn more about [SQL database - sqlRightsizePerformance (Right-size overutilized SQL Databases)](https://aka.ms/SQLDBrecommendation).
+
+### Scale up to optimize cache utilization with SQL Data Warehouse
+
+We have detected that you had high cache used percentage with a low hit percentage. This indicates high cache eviction which can impact the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache).
+
+### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse
+
+We have detected that you had high tempdb utilization which can impact the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb).
+
+### Convert tables to replicated tables with SQL Data Warehouse
+
+We have detected that you may benefit from using replicated tables. Replicated tables avoid costly data movement operations and significantly increase the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables).
+
+### Split staged files in the storage account to increase load performance
+
+We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more files to maximize the parallelism of your load.
+
+Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit).
+
+### Increase batch size when loading to maximize load throughput, data compression, and query performance
+
+We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP; a good rule of thumb is a batch size between 100K and 1M rows.
+
+Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize).
+
+### Co-locate the storage account within the same region to minimize latency when loading
+
+We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
+
+Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation).
+
+## Storage
+
+### Use "Put Blob" for blobs smaller than 256 MB
+
+When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized.
+
+Learn more about [Storage Account - StorageCallPutBlob (Use \"Put Blob\" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs).
+
+### Upgrade your Storage Client Library to the latest version for better reliability and performance
+
+The latest version of Storage Client Library/ SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage.
+
+Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca).
+
+### Upgrade to Standard SSD Disks for consistent and improved performance
+
+Because you are running IaaS virtual machine workloads on Standard HDD managed disks, we wanted to let you know that a Standard SSD disk option is now available for all Azure VM types. Standard SSD disks are a cost-effective storage option optimized for enterprise workloads that need consistent performance. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which will take three to five minutes.
+
+Learn more about [Storage Account - StandardSSDForNonPremVM (Upgrade to Standard SSD Disks for consistent and improved performance)](/azure/virtual-machines/windows/disks-types#standard-ssd).
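+
+For managed disks, the conversion is a SKU change on the disk while the VM is deallocated. A minimal sketch follows; the VM and disk names are placeholders.
+
+```azurecli
+# Deallocate the VM, convert its Standard HDD managed disk to Standard SSD, then restart (hypothetical names).
+az vm deallocate --resource-group myResourceGroup --name myVm
+az disk update --resource-group myResourceGroup --name myOsDisk --sku StandardSSD_LRS
+az vm start --resource-group myResourceGroup --name myVm
+```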
+
+### Upgrade your Storage Client Library to the latest version for better reliability and performance
+
+The latest version of Storage Client Library/ SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage.
+
+### Use premium performance block blob storage
+
+One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
+
+Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob).
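+
+A minimal sketch for creating a premium block blob account to move the hot workload to; the account name, location, and resource group are placeholders.
+
+```azurecli
+# Create a premium-performance block blob storage account (hypothetical names and location).
+az storage account create --resource-group myResourceGroup --name mypremiumblobacct \
+  --location eastus --kind BlockBlobStorage --sku Premium_LRS
+```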
+
+### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance
+
+We have noticed your Unmanaged HDD Disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which will take three to five minutes.
+
+Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unmanaged Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
+
+### No Snapshots Detected
+
+We have observed that there are no snapshots of your file shares. This means you are not protected from accidental file deletion or file corruption. Please enable snapshots to protect your data. One way to do this is through Azure Backup.
+
+Learn more about [Storage Account - EnableSnapshots (No Snapshots Detected)](/azure/backup/azure-file-share-backup-overview).
+
+## Synapse
+
+### Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows
+
+Clustered columnstore tables organize data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. Segment quality can be measured by the number of rows in a compressed row group.
+
+Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
+
+### Update SynapseManagementClient SDK Version
+
+The new SynapseManagementClient uses .NET SDK 4.0 or above.
+
+Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK).
+
+## Web
+
+### Move your App Service Plan to PremiumV2 for better performance
+
+Your app served more than 1000 requests per day for the past 3 days. Your app may benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and doubled memory-to-core ratio when compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation.
+
+Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2).
+
+### Check outbound connections from your App Service resource
+
+Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
+
+Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
++
+## Next steps
+
+Learn more about [Performance Efficiency - Microsoft Azure Well Architected Framework](/azure/architecture/framework/scalability/overview)
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
+
+ Title: Reliability recommendations
+description: Full list of available reliability recommendations in Advisor.
+ Last updated : 02/04/2022++
+# Reliability recommendations
+
+Azure Advisor helps you ensure and improve the continuity of your business-critical applications. You can get reliability recommendations on the **Reliability** tab on the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Reliability** tab.
+
+## FarmBeats
+
+### Upgrade to the latest FarmBeats API version
+
+We have identified calls to a FarmBeats API version that is scheduled for deprecation. We recommend switching to the latest FarmBeats API version to ensure uninterrupted access to FarmBeats, latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest FarmBeats API version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+## API Management
+
+### Hostname certificate rotation failed
+
+The API Management service failed to refresh the hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and that the API Management service identity is granted secret read access. Otherwise, the API Management service will not be able to retrieve certificate updates from Key Vault, which may lead to the service using a stale certificate and runtime API traffic being blocked as a result.
+
+Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
+
+### SSL/TLS renegotiation blocked
+
+SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it is blocked, reading 'context.Request.Certificate' in policy expressions will return 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+
+Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients).
+
+## Cache
+
+### Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.
+
+Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing the reservation of memory for fragmentation helps reduce cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting available in the advanced settings blade.
+
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies).
+
+## Compute
+
+### Enable Backups on your Virtual Machines
+
+Enable backups for your virtual machines and secure your data.
+
+Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your Virtual Machines)](/azure/backup/backup-overview).
+
+### Upgrade the standard disks attached to your premium-capable VM to premium disks
+
+We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+
+Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).
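+
+For managed disks, the upgrade is a disk SKU change performed while the VM is deallocated. A minimal sketch follows; the names are placeholders, and the VM size must support premium storage.
+
+```azurecli
+# Upgrade a standard managed disk to Premium SSD on a premium-capable VM (hypothetical names).
+az vm deallocate --resource-group myResourceGroup --name myVm
+az disk update --resource-group myResourceGroup --name myDataDisk --sku Premium_LRS
+az vm start --resource-group myResourceGroup --name myVm
+```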
+
+### Enable virtual machine replication to protect your applications from regional outage
+
+Virtual machines that do not have replication enabled to another region are not resilient to regional outages. Replicating the machines drastically reduces any adverse business impact during an Azure region outage. We highly recommend enabling replication for all the business-critical virtual machines in the list below so that, in the event of an outage, you can quickly bring up your machines in a remote Azure region.
+
+Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms).
+
+### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost
+
+We have identified that your VM is using premium unmanaged disks that can be migrated to managed disks at no additional cost. Azure Managed Disks provides higher resiliency, simplified service management, higher scale target and more choices among several disk types. This upgrade can be done through the portal in less than 5 minutes.
+
+Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost (Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost)](https://aka.ms/md_overview).
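+
+A minimal sketch of the migration, assuming a VM named `myVm` in `myResourceGroup` (placeholders); `az vm convert` migrates the VM's unmanaged disks to managed disks and requires the VM to be deallocated first.
+
+```azurecli
+# Deallocate the VM, convert its unmanaged disks to managed disks, then start it again (hypothetical names).
+az vm deallocate --resource-group myResourceGroup --name myVm
+az vm convert --resource-group myResourceGroup --name myVm
+az vm start --resource-group myResourceGroup --name myVm
+```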
+
+### Update your outbound connectivity protocol to Service Tags for Azure Site Recovery
+
+Using IP address-based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. We advise using Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags to allow connectivity to Azure Site Recovery services for the machines.
+
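+As a sketch (the NSG name, rule name, and priority are hypothetical), an outbound network security group rule can reference the Azure Site Recovery service tag instead of individual IP addresses. Replication also depends on other endpoints, such as Storage and Azure Active Directory, so review the linked guidance for the full set of tags:
+
+```azurecli
+# Allow outbound HTTPS traffic to Azure Site Recovery by using its service tag.
+az network nsg rule create \
+    --resource-group myResourceGroup \
+    --nsg-name myNSG \
+    --name AllowAzureSiteRecoveryOutbound \
+    --priority 200 \
+    --direction Outbound \
+    --access Allow \
+    --protocol Tcp \
+    --destination-port-ranges 443 \
+    --destination-address-prefixes AzureSiteRecovery
+```
+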
+Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](https://aka.ms/azure-site-recovery-using-service-tags).
+
+### Use Managed Disks to improve data reliability
+
+Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units are not resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
+
+Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
+
+### Check Point Virtual Machine may lose Network Connectivity.
+
+We have identified that your Virtual Machine might be running a version of the Check Point image that has been known to lose network connectivity during a platform servicing operation. We recommend that you upgrade to a newer version of the image that addresses this issue. Please contact Check Point for further instructions on how to upgrade your image.
+
+Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point Virtual Machine may lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard).
+
+### Access to mandatory URLs missing for your Windows Virtual Desktop environment
+
+For a session host to deploy and register to Windows Virtual Desktop (WVD) properly, you need to add a set of URLs to the allowed list if your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you will see the minimum list of URLs you need to unblock to have a successful deployment and a functional session host. For specific URLs missing from the allowed list, you can also search the Application event log for event 3702.
+
+Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Windows Virtual Desktop environment)](/azure/virtual-desktop/safe-url-list).
+
+## PostgreSQL
+
+### Improve PostgreSQL availability by removing inactive logical replication slots
+
+Our internal telemetry indicates that your PostgreSQL server may have inactive logical replication slots. This needs immediate attention: inactive slots can result in degraded server performance and unavailability due to WAL file retention and the buildup of snapshot files. To improve performance and availability, we strongly recommend that you immediately either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and stays close to the current LSN of the server.
+
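+As an illustrative sketch (the server, user, and slot names are hypothetical, and this assumes the psql client is installed), you can list the slots and drop any that are no longer consumed:
+
+```azurecli
+# List replication slots and check which ones are inactive.
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require" \
+     -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"
+
+# Drop a slot that is confirmed to be unused.
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require" \
+     -c "SELECT pg_drop_replication_slot('my_inactive_slot');"
+```
+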
+Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding).
+
+### Improve PostgreSQL availability by removing inactive logical replication slots
+
+Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. This needs immediate attention: inactive slots can result in degraded server performance and unavailability due to WAL file retention and the buildup of snapshot files. To improve performance and availability, we strongly recommend that you immediately either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and stays close to the current LSN of the server.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
+
+## IoT Hub
+
+### Upgrade device client SDK to a supported version for IotHub
+
+Some or all of your devices are using an outdated SDK. We recommend that you upgrade to a supported SDK version. See the details in the recommendation.
+
+Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
+
+## Cosmos DB
+
+### Configure Consistent indexing mode on your Azure Cosmos container
+
+We noticed that your Azure Cosmos container is configured with the Lazy indexing mode, which may impact the freshness of query results. We recommend switching to Consistent mode.
+
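+As a sketch (the account, database, container, and policy file names are hypothetical), the indexing policy can be switched to consistent mode with the Azure CLI by supplying an updated policy document:
+
+```azurecli
+# indexing-policy.json contains, for example:
+# { "indexingMode": "consistent", "automatic": true, "includedPaths": [ { "path": "/*" } ], "excludedPaths": [] }
+az cosmosdb sql container update \
+    --resource-group myResourceGroup \
+    --account-name mycosmosaccount \
+    --database-name myDatabase \
+    --name myContainer \
+    --idx @indexing-policy.json
+```
+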
+Learn more about [Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos container)](/azure/cosmos-db/how-to-manage-indexing-policy).
+
+### Upgrade your old Azure Cosmos DB SDK to the latest version
+
+Your Azure Cosmos DB account is using an old version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/).
+
+### Upgrade your outdated Azure Cosmos DB SDK to the latest version
+
+Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/).
+
+### Configure your Azure Cosmos DB containers with a partition key
+
+Your Azure Cosmos DB non-partitioned collections are approaching their provisioned storage quota. Please migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
+
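+For example, here's a minimal sketch that creates a new container with a partition key definition (the names and key path are hypothetical); you then migrate the data from the non-partitioned collection into it:
+
+```azurecli
+# Create a new container partitioned on /myPartitionKey.
+az cosmosdb sql container create \
+    --resource-group myResourceGroup \
+    --account-name mycosmosaccount \
+    --database-name myDatabase \
+    --name myPartitionedContainer \
+    --partition-key-path "/myPartitionKey" \
+    --throughput 400
+```
+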
+Learn more about [Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](/azure/cosmos-db/partitioning-overview#choose-partitionkey).
+
+### Upgrade your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
+
+Your Azure Cosmos DB API for MongoDB account is eligible to upgrade to version 4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0.
+
+Learn more about [Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgrade your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-version-upgrade).
+
+### Add a second region to your production workloads on Azure Cosmos DB
+
+Based on their names and configuration, we have detected the Azure Cosmos DB accounts below as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions.
+
+> [!NOTE]
+> Additional regions will incur extra costs.
+
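+As a sketch of adding a secondary region with the Azure CLI (the account name and regions are hypothetical), note that the `--locations` arguments replace the existing configuration, so list every region the account should keep:
+
+```azurecli
+# Configure the account to span two regions, with East US as the write region.
+az cosmosdb update \
+    --name mycosmosaccount \
+    --resource-group myResourceGroup \
+    --locations regionName=eastus failoverPriority=0 isZoneRedundant=False \
+    --locations regionName=westus failoverPriority=1 isZoneRedundant=False
+```
+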
+Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](/azure/cosmos-db/high-availability).
+
+### Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account
+
+We observed that your account is throwing TooManyRequests errors with the 16500 error code. Enabling Server Side Retry (SSR) can help mitigate this issue for you.
+
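+As an illustrative sketch (the account name is hypothetical, and the capability name is taken from the linked guidance), SSR is enabled through the `DisableRateLimitingResponses` capability. Because `--capabilities` replaces the full list, include any capabilities the account already has, such as `EnableMongo`:
+
+```azurecli
+# Enable Server Side Retry on an API for MongoDB account.
+az cosmosdb update \
+    --name mycosmosaccount \
+    --resource-group myResourceGroup \
+    --capabilities EnableMongo DisableRateLimitingResponses
+```
+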
+Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/prevent-rate-limiting-errors).
+
+### Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
+
+Migrate your database account to a new database account to take advantage of Azure Cosmos DB's API for MongoDB v4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0. When upgrading, you must also migrate the data in your existing account to a new account created using version 4.0. Azure Data Factory or Studio 3T can assist you in migrating your data.
+
+Learn more about [Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-feature-support-40).
+
+### Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key
+
+It appears that your key vault's configuration is preventing your Cosmos DB account from contacting the key vault to access your managed encryption keys. If you've recently performed a key rotation, make sure that the previous key or key version remains enabled and available until Cosmos DB has completed the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show activity from Azure Cosmos DB on that key or key version anymore.
+
+Learn more about [Cosmos DB account - CosmosDBKeyVaultWrap (Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](/azure/cosmos-db/how-to-setup-cmk).
+
+### Avoid being rate limited from metadata operations
+
+We found a high number of metadata operations on your account. Your data in Cosmos DB, including metadata about your databases and collections is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. Avoid being rate limited from metadata operations by using static Cosmos DB client instances in your code and caching the names of databases and collections.
+
+Learn more about [Cosmos DB account - CosmosDBHighMetadataOperations (Avoid being rate limited from metadata operations)](/azure/cosmos-db/performance-tips).
+
+### Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB's API for MongoDB account
+
+We observed some of your applications are connecting to your upgraded Azure Cosmos DB's API for MongoDB account using the legacy 3.2 endpoint - [accountname].documents.azure.com. Use the new endpoint - [accountname].mongo.cosmos.azure.com (or its equivalent in sovereign, government, or restricted clouds).
+
+Learn more about [Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/mongodb-feature-support-40).
+
+### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated
+
+There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 that causes errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. The LSN advances transparently as the service processes a large volume of transactions over the lifetime of an Azure Cosmos DB container. Note: although 2.6.14 is a critical hotfix for the Async Java SDK v2, we still highly recommend that you migrate to the [Java SDK v4](/azure/cosmos-db/sql/sql-api-sdk-java-v4).
+
+Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](/azure/cosmos-db/sql/sql-api-sdk-async-java).
+
+### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue
+
+There is a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 that causes errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. The LSN advances transparently as the service processes a large volume of transactions over the lifetime of an Azure Cosmos DB container.
+
+Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](/azure/cosmos-db/sql/sql-api-sdk-java-v4).
+
+## Fluid Relay
+
+### Upgrade your Azure Fluid Relay client library
+
+You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading will provide the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, please refer to the article.
+
+Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework).
+
+## HDInsight
+
+### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
+
+Starting July 1, 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption.
+
+Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka).
+
+### Deprecation of Older Spark Versions in HDInsight Spark cluster
+
+Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft.
+
+Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark).
+
+### Enable critical updates to be applied to your HDInsight clusters
+
+The HDInsight service is applying an important certificate-related update to your cluster. However, one or more policies in your subscription are preventing the HDInsight service from creating or modifying network resources (load balancer, network interface, and public IP address) associated with your clusters and applying this update. Please take action to allow the HDInsight service to create or modify these network resources before Jan 13, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+
+Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Drop and recreate your HDInsight clusters to apply critical updates
+
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters.
+
+Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Drop and recreate your HDInsight clusters to apply critical updates
+
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters. Please drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable.
+
+Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Apply critical updates to your HDInsight clusters
+
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing the HDInsight service from creating or modifying network resources (load balancer, network interface, and public IP address) associated with your clusters and applying this update. Please remove or update your policy assignment to allow the HDInsight service to create or modify these network resources before Jan 21, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (load balancer, network interface, and public IP address) in the same resource group and subnet as your cluster. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service will send another notification if we fail to apply the update to your clusters.
+
+Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021
+
+You're receiving this notice because you have one or more active A8, A9, A10, or A11 HDInsight clusters. The A8-A11 virtual machines (VMs) will be retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 will be deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see the 'Learn More' link or contact us at askhdinsight@microsoft.com.
+
+Learn more about [HDInsight cluster - VMDeprecation (Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
+
+## Hybrid Compute
+
+### Upgrade to the latest version of the Azure Connected Machine agent
+
+The Azure Connected Machine agent is updated regularly with bug fixes, stability enhancements, and new functionality. Upgrade your agent to the latest version for the best Azure Arc experience.
+
+Learn more about [Machine - Azure Arc - ArcServerAgentVersion (Upgrade to the latest version of the Azure Connected Machine agent)](/azure/azure-arc/servers/manage-agent).
+
+## Kubernetes
+
+### Pod Disruption Budgets Recommended
+
+Pod disruption budgets are recommended to improve the high availability of your services.
+
+Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disruption Budgets Recommended)](https://aka.ms/aks-pdb).
+
+### Upgrade to the latest agent version of Azure Arc-enabled Kubernetes
+
+Upgrade to the latest agent version for the best Azure Arc-enabled Kubernetes experience, improved stability, and new functionality.
+
+Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](https://aka.ms/ArcK8sAgentUpgradeDocs).
+
+## Media Services
+
+### Increase Media Services quotas or limits to ensure continuity of service.
+
+Please be advised that your Media Services account is about to hit its quota limits. Review the current usage of assets, content key policies, and streaming policies for the account. To avoid any disruption of service, request quota limits to be increased for the entities that are closest to hitting the quota limit. You can request increased quota limits by opening a ticket and adding relevant details to it. Please don't create additional Azure Media Services accounts in an attempt to obtain higher limits.
+
+Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
+
+## Networking
+
+### Upgrade your SKU or add more instances to ensure fault tolerance
+
+Deploying two or more medium or large-sized instances ensures business continuity during outages caused by planned or unplanned maintenance.
+
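+As a sketch (the gateway name and capacity are hypothetical), the instance count can be raised with a generic property update:
+
+```azurecli
+# Increase the Application Gateway instance count to two.
+az network application-gateway update \
+    --resource-group myResourceGroup \
+    --name myAppGateway \
+    --set sku.capacity=2
+```
+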
+Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more instances to ensure fault tolerance)](https://aka.ms/aa_gatewayrec_learnmore).
+
+### Move to production gateway SKUs from Basic gateways
+
+The VPN gateway Basic SKU is designed for development or testing scenarios. Please move to a production SKU if you are using the VPN gateway for production purposes. The production SKUs offer a higher number of tunnels, BGP support, active-active configuration, and custom IPsec/IKE policy, in addition to higher stability and availability.
+
+Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore).
+
+### Add at least one more endpoint to the profile, preferably in another Azure region
+
+Profiles should have more than one endpoint to ensure availability if one of the endpoints fails. It is also recommended that endpoints be in different regions.
+
+Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4).
+
+### Add an endpoint configured to "All (World)"
+
+For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles will avoid traffic black holing and guarantee service remains available.
+
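+As a sketch (the profile, endpoint, and target names are hypothetical, and the `WORLD` geographic code is an assumption to confirm against the Traffic Manager geographic hierarchy), a catch-all external endpoint can be added like this:
+
+```azurecli
+# Add a fallback endpoint whose regional grouping covers the whole world.
+az network traffic-manager endpoint create \
+    --resource-group myResourceGroup \
+    --profile-name myGeographicProfile \
+    --name myFallbackEndpoint \
+    --type externalEndpoints \
+    --target fallback.contoso.com \
+    --geo-mapping WORLD
+```
+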
+Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \"All (World)\")](https://aka.ms/Rf7vc5).
+
+### Add or move one endpoint to another Azure region
+
+All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability if all endpoints in one region fail.
+
+Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
+
+### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency
+
+We have detected that your ExpressRoute gateway has only one ExpressRoute circuit associated with it. Connect one or more additional circuits to your gateway to ensure peering location redundancy and resiliency.
+
+Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency)](/azure/expressroute/designing-for-high-availability-with-expressroute).
+
+### Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit
+
+We have detected that your ExpressRoute circuit is not currently being monitored by ExpressRoute Monitor on Network Performance Monitor. ExpressRoute Monitor provides end-to-end monitoring capabilities, including loss, latency, and performance from on-premises to Azure and from Azure to on-premises.
+
+Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](/azure/expressroute/how-to-npm).
+
+### Avoid hostname override to ensure site integrity
+
+Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one used to access the backend can potentially lead to broken cookies or redirect URLs. Note that this might not be the case in all situations, and certain categories of backends (like REST APIs) are generally less sensitive to this. Make sure the backend is able to deal with the difference, or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid using the *.azurewebsites.net host name towards the backend.
+
+Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
+
+### Use ExpressRoute Global Reach to improve your design for disaster recovery
+
+You appear to have ExpressRoute circuits peered in at least two different locations. Connect them to each other using ExpressRoute Global Reach to allow traffic to continue flowing between your on-premises network and Azure environments in the event of one circuit losing connectivity. You can establish Global Reach connections between circuits in different peering locations within the same metro or across metros.
+
+Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](/azure/expressroute/about-upgrade-circuit-bandwidth).
+
+### Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule
+
+In response to log4j2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide additional protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable this.
+
+Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve).
+
+### Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228)
+
+To mitigate the impact of Log4j2 vulnerability, we recommend these steps:
+
+1) Upgrade Log4j2 to version 2.15.0 on your backend servers. If an upgrade isn't possible, follow the system property guidance link below.
+2) Take advantage of the WAF Core rule sets (CRS) by upgrading to the WAF SKU.
+
+Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
+
+### Enable Active-Active gateways for redundancy
+
+In an active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic is switched over to the other active IPsec tunnel automatically.
+
+Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore).
+
+## Recovery Services
+
+### Enable soft delete for your Recovery Services vaults
+
+Soft delete helps you retain your backup data in the Recovery Services vault for an additional duration after deletion, giving you an opportunity to retrieve it before it is permanently deleted.
+
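+For example, here's a minimal sketch with the Azure CLI (the vault name is hypothetical):
+
+```azurecli
+# Turn on soft delete for a Recovery Services vault.
+az backup vault backup-properties set \
+    --name myRecoveryServicesVault \
+    --resource-group myResourceGroup \
+    --soft-delete-feature-state Enable
+```
+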
+Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](/azure/backup/backup-azure-security-feature-cloud).
+
+### Enable Cross Region Restore for your recovery Services Vault
+
+Enabling cross-region restore for your geo-redundant vaults lets you restore backup data in the Azure paired secondary region.
+
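+As an illustrative sketch (the vault name is hypothetical, and the availability of this flag in your CLI version is an assumption to verify):
+
+```azurecli
+# Enable Cross Region Restore on a geo-redundant Recovery Services vault.
+az backup vault backup-properties set \
+    --name myRecoveryServicesVault \
+    --resource-group myResourceGroup \
+    --cross-region-restore-flag True
+```
+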
+Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your recovery Services Vault)](/azure/backup/backup-azure-arm-restore-vms#cross-region-restore).
+
+## Search
+
+### You are close to exceeding storage quota of 2GB. Create a Standard search service.
+
+You are close to exceeding the storage quota of 2 GB. Create a Standard search service. Indexing operations will stop working when the storage quota is exceeded.
+
+Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
+
+### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.
+
+You are close to exceeding the storage quota of 50 MB. Create a Basic or Standard search service. Indexing operations will stop working when the storage quota is exceeded.
+
+Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
+
+### You are close to exceeding your available storage quota. Add additional partitions if you need more storage.
+
+You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding the storage quota, you can still query, but indexing operations will no longer work.
+
+Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](https://aka.ms/azs/search-limits-quotas-capacity).
+
+## Storage
+
+### Enable Soft Delete to protect your blob data
+
+After enabling Soft Delete, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
+
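+For example, here's a minimal sketch with the Azure CLI (the account name and retention period are hypothetical):
+
+```azurecli
+# Enable blob soft delete with a 7-day retention period.
+az storage account blob-service-properties update \
+    --account-name mystorageaccount \
+    --resource-group myResourceGroup \
+    --enable-delete-retention true \
+    --delete-retention-days 7
+```
+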
+Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
+
+### Use Managed Disks for storage accounts reaching capacity limit
+
+We have identified that you are using Premium SSD unmanaged disks in storage account(s) that are about to reach the Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks, which do not have an account capacity limit. This migration can be done through the portal in less than 5 minutes.
+
+Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
+
+## Web
+
+### Consider scaling out your App Service Plan to avoid CPU exhaustion
+
+Your app reached over 90% CPU utilization over the last couple of days. High CPU utilization can lead to runtime issues with your apps. To solve this, you could scale out your app.
+
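+For example, here's a sketch that scales the plan out to three instances (the plan name and instance count are hypothetical):
+
+```azurecli
+# Scale the App Service plan out to three workers.
+az appservice plan update \
+    --name myAppServicePlan \
+    --resource-group myResourceGroup \
+    --number-of-workers 3
+```
+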
+Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
+
+### Fix the backup database settings of your App Service resource
+
+Your app's backups are consistently failing due to an invalid database configuration. You can find more details in the backup history.
+
+Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](https://aka.ms/antbc).
+
+### Consider scaling up your App Service Plan SKU to avoid memory exhaustion
+
+The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
+
+Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](https://aka.ms/antbc-memory).
+
+### Scale up your App Service resource to remove the quota limit
+
+Your app is part of a shared App Service plan and has met its quota multiple times. After meeting a quota, your web app can't accept incoming requests. To remove the quota, upgrade to a Standard plan.
+
+Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](https://aka.ms/ant-asp).
+
+### Use deployment slots for your App Service resource
+
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+
+Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging).
+
+### Fix the backup storage settings of your App Service resource
+
+Your app's backups are consistently failing due to invalid storage settings. You can find more details in the backup history.
+
+Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](https://aka.ms/antbc).
+
+### Move your App Service resource to Standard or higher and use deployment slots
+
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+
+Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
+
+### Consider scaling out your App Service Plan to optimize user experience and availability.
+
+Consider scaling out your App Service Plan to at least two instances to avoid cold start delays and service interruptions during routine maintenance.
+
+Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scaling out your App Service Plan to optimize user experience and availability.)](https://aka.ms/appsvcnuminstances).
+
+### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.
+
+The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100 GB. Consider upgrading these apps to the Standard SKU to avoid throttling.
+
+Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.)](https://azure.microsoft.com/pricing/details/app-service/static/).
+
+### Application code should be fixed as worker process crashed due to Unhandled Exception
+
+We identified that the thread below resulted in an unhandled exception for your app, and the application code should be fixed to prevent impact to application availability. A crash happens when an exception in your code goes unhandled and terminates the process.
+
+Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code should be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).
++
+## Next steps
+
+Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview)
advisor Advisor Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-security-recommendations.md
For more information about security recommendations, see [Review your security r
To learn more about Advisor recommendations, see: * [Introduction to Advisor](advisor-overview.md) * [Get started with Advisor](advisor-get-started.md)
-* [Advisor cost recommendations](advisor-cost-recommendations.md)
-* [Advisor performance recommendations](advisor-performance-recommendations.md)
-* [Advisor reliability recommendations](advisor-high-availability-recommendations.md)
-* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md)
+* [Advisor cost recommendations](advisor-reference-cost-recommendations.md)
+* [Advisor performance recommendations](advisor-reference-performance-recommendations.md)
+* [Advisor reliability recommendations](advisor-reference-reliability-recommendations.md)
+* [Advisor operational excellence recommendations](advisor-reference-operational-excellence-recommendations.md)
* [Advisor REST API](/rest/api/advisor/)
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Rotate certificates in Azure Kubernetes Service (AKS)
description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster. Previously updated : 1/9/2022 Last updated : 3/3/2022 # Rotate certificates in Azure Kubernetes Service (AKS)
az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-
Azure Kubernetes Service will automatically rotate non-CA certificates on both the control plane and agent nodes before they expire with no downtime for the cluster.
-For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/). TLS Bootstrapping is currently available in the following regions:
-
-* eastus2euap
-* centraluseuap
-* westcentralus
-* uksouth
-* eastus
-* australiacentral
-* australiaest
+For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/).
#### How to check whether the current agent node pool is TLS Bootstrapping enabled? To verify whether TLS Bootstrapping is enabled on your cluster, browse to the following paths. On a Linux node: /var/lib/kubelet/bootstrap-kubeconfig; on a Windows node, it's c:\k\bootstrap-config.
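For example, here's a sketch that lists the bootstrap kubeconfig on a Linux node through the run-command extension (the scale set name and instance ID are hypothetical):

```azurecli
# Check for the bootstrap kubeconfig on a node in the cluster's node resource group.
az vmss run-command invoke \
    --resource-group MC_rg_myAKSCluster_region \
    --name myNodePoolVmss \
    --instance-id 0 \
    --command-id RunShellScript \
    --scripts "ls -l /var/lib/kubelet/bootstrap-kubeconfig"
```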
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
The settings below can be used to tune the operation of the virtual memory (VM)
| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor's memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. | | `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. | ++ > [!IMPORTANT] > For ease of search and readability the OS settings are displayed in this document by their name but should be added to the configuration json file or AKS API using [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions).
Add a new node pool specifying the Kubelet parameters using the JSON file you cr
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json ``` +
+## Other configuration
+
+The settings below can be used to modify other Operating System settings.
+
+### Message of the Day
+
+Pass the `--message-of-the-day` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation.
++
+#### Cluster creation
+```azurecli
+az aks create --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt
+```
+
+#### Nodepool creation
+```azurecli
+az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt
+```
+++ ## Next steps - Learn [how to configure your AKS cluster](cluster-configuration.md).
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
Title: Install the Open Service Mesh (OSM) add-on using Azure CLI
-description: Install Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Azure CLI
+ Title: Install the Open Service Mesh add-on by using the Azure CLI
+description: Use Azure CLI commands to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster.
Last updated 11/10/2021
-# Install the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Azure CLI
+# Install the Open Service Mesh add-on by using the Azure CLI
-This article shows you how to install the OSM add-on on an AKS cluster and verify it is installed and running.
+This article shows you how to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster and verify that it's installed and running.
> [!IMPORTANT] > The OSM add-on installs version *1.0.0* of OSM on your cluster.
This article shows you how to install the OSM add-on on an AKS cluster and verif
* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). * [Azure CLI installed](/cli/azure/install-azure-cli).
-## Install the OSM AKS add-on on your cluster
+## Install the OSM add-on on your cluster
-To install the OSM AKS add-on, use `--enable-addons open-service-mesh` when creating or updating a cluster.
+To install the OSM add-on, use `--enable-addons open-service-mesh` when creating or updating a cluster.
-The following example creates a *myResourceGroup* resource group. Then creates a *myAKSCluster* cluster with a three nodes and the OSM add-on.
+The following example creates a *myResourceGroup* resource group. Then it creates a *myAKSCluster* cluster with three nodes and the OSM add-on.
```azurecli-interactive az group create --name myResourceGroup --location eastus
az aks create \
--enable-addons open-service-mesh ```
-For existing clusters, use `az aks enable-addons`. For example:
+For existing clusters, use `az aks enable-addons`. The following code shows an example.
> [!IMPORTANT] > You can't enable the OSM add-on on an existing cluster if an OSM mesh is already on your cluster. Uninstall any existing OSM meshes on your cluster before enabling the OSM add-on.
az aks enable-addons \
## Get the credentials for your cluster
-Get the credentials for your AKS cluster using the `az aks get-credentials` command. The following example command gets the credentials for the *myAKSCluster* in the *myResourceGroup* resource group.
+Get the credentials for your AKS cluster by using the `az aks get-credentials` command. The following example command gets the credentials for *myAKSCluster* in the *myResourceGroup* resource group:
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-## Verify the OSM add-on is installed on your cluster
+## Verify that the OSM add-on is installed on your cluster
-To see if the OSM add-on is enabled on your cluster, verify the *enabled* value shows a *true* for *openServiceMesh* under *addonProfiles*. The following example shows the status of the OSM add-on for the *myAKSCluster* in *myResourceGroup*.
+To see if the OSM add-on is installed on your cluster, verify that the `enabled` value is `true` for `openServiceMesh` under `addonProfiles`. The following example shows the status of the OSM add-on for *myAKSCluster* in *myResourceGroup*:
```azurecli-interactive az aks show --resource-group myResourceGroup --name myAKSCluster --query 'addonProfiles.openServiceMesh.enabled' ```
-## Verify the OSM mesh is running on your cluster
+## Verify that the OSM mesh is running on your cluster
-In addition to verifying the OSM add-on has been enabled on your cluster, you can also verify the version, status, and configuration of the OSM mesh running on your cluster.
-
-To verify the version of the OSM mesh running on your cluster, use `kubectl` to display the image version of the *osm-controller* deployment. For example:
+You can verify the version, status, and configuration of the OSM mesh that's running on your cluster. Use `kubectl` to display the image version of the *osm-controller* deployment. For example:
```azurecli-interactive kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}'
$ kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.temp
mcr.microsoft.com/oss/openservicemesh/osm-controller:v0.11.1 ```
-To verify the status of the OSM components running on your cluster, use `kubectl` to show the status of the *app.kubernetes.io/name=openservicemesh.io* deployments, pods, and services. For example:
+To verify the status of the OSM components running on your cluster, use `kubectl` to show the status of the `app.kubernetes.io/name=openservicemesh.io` deployments, pods, and services. For example:
```azurecli-interactive kubectl get deployments -n kube-system --selector app.kubernetes.io/name=openservicemesh.io
kubectl get services -n kube-system --selector app.kubernetes.io/name=openservic
``` > [!IMPORTANT]
-> If any pods have a status other than *Running*, such as *Pending*, your cluster may not have enough resources to run OSM. Review the sizing for your cluster, such as the number of nodes and the VM SKU, before continuing to use OSM on your cluster.
+> If any pods have a status other than `Running`, such as `Pending`, your cluster might not have enough resources to run OSM. Review the sizing for your cluster, such as the number of nodes and the virtual machine's SKU, before continuing to use OSM on your cluster.
To verify the configuration of your OSM mesh, use `kubectl get meshconfig`. For example:
To verify the configuration of your OSM mesh, use `kubectl get meshconfig`. For
kubectl get meshconfig osm-mesh-config -n kube-system -o yaml ```
-The following sample output shows the configuration of an OSM mesh:
+The following example output shows the configuration of an OSM mesh:
```yaml apiVersion: config.openservicemesh.io/v1alpha1
spec:
useHTTPSIngress: false ```
-The above example output shows `enablePermissiveTrafficPolicyMode: true`, which means OSM has a permissive traffic policy mode enabled. With permissive traffic mode enabled in your OSM mesh:
+The preceding example shows `enablePermissiveTrafficPolicyMode: true`, which means OSM has permissive traffic policy mode enabled. With this mode enabled in your OSM mesh:
* The [SMI][smi] traffic policy enforcement is bypassed. * OSM automatically discovers services that are a part of the service mesh. * OSM creates traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services. -- ## Delete your cluster
-When the cluster is no longer needed, use the `az group delete` command to remove the resource group, cluster, and all related resources.
+When you no longer need the cluster, use the `az group delete` command to remove the resource group, the cluster, and all related resources:
```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait ```
-Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh (OSM) add-on from your AKS cluster][osm-uninstall].
+Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall].
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+This article showed you how to install the OSM add-on on an AKS cluster, and then verify that it's installed and running. With the OSM add-on installed on your cluster, you can [deploy a sample application][osm-deploy-sample-app] or [onboard an existing application][osm-onboard-app] to work with your OSM mesh.
[aks-ephemeral]: cluster-configuration.md#ephemeral-os [osm-sample]: open-service-mesh-deploy-new-application.md
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
Title: Deploy Open Service Mesh AKS add-on using Bicep
-description: Deploy Open Service Mesh on Azure Kubernetes Service (AKS) using Bicep
+ Title: Deploy the Open Service Mesh add-on by using Bicep
+description: Use a Bicep template to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS).
Last updated 9/20/2021
-# Deploy Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Bicep
+# Deploy the Open Service Mesh add-on by using Bicep
-This article will discuss how to deploy the OSM add-on to AKS using a [Bicep](../azure-resource-manager/bicep/index.yml) template.
+This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS) by using a [Bicep](../azure-resource-manager/bicep/index.yml) template.
> [!IMPORTANT] > The OSM add-on installs version *1.0.0* of OSM on your cluster.
-[Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. Bicep can be used in place of creating Azure [ARM](../azure-resource-manager/templates/overview.md) templates for deploying your infrastructure-as-code Azure resources.
+[Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language that uses declarative syntax to deploy Azure resources. You can use Bicep in place of creating [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to deploy your infrastructure-as-code Azure resources.
## Prerequisites -- The Azure CLI, version 2.20.0 or later-- OSM version v0.11.1 or later-- An SSH Public Key used for deploying AKS-- [Visual Studio Code](https://code.visualstudio.com/) utilizing a Bash terminal-- Visual Studio Code [Bicep extension](../azure-resource-manager/bicep/install.md)
+- Azure CLI version 2.20.0 or later
+- OSM version 0.11.1 or later
+- An SSH public key used for deploying AKS
+- [Visual Studio Code](https://code.visualstudio.com/) with a Bash terminal
+- The Visual Studio Code [Bicep extension](../azure-resource-manager/bicep/install.md)
-## Install the OSM AKS add-on for a new AKS cluster using Bicep
+## Install the OSM add-on for a new AKS cluster by using Bicep
-For a new AKS cluster deployment scenario, start with a brand new deployment of an AKS cluster with the OSM add-on enabled at the cluster create operation. The following set of directions will use a generic Bicep template that deploys an AKS cluster using ephemeral disks, using the [`kubenet`](./configure-kubenet.md) CNI, and enabling the AKS OSM add-on. For more advanced deployment scenarios visit the [Bicep](../azure-resource-manager/bicep/overview.md) documentation.
+For deployment of a new AKS cluster, you enable the OSM add-on at cluster creation. The following instructions use a generic Bicep template that deploys an AKS cluster by using ephemeral disks and the [`kubenet`](./configure-kubenet.md) container network interface, and then enables the OSM add-on. For more advanced deployment scenarios, see [What is Bicep?](../azure-resource-manager/bicep/overview.md).
### Create a resource group
-In Azure, you can associate related resources using a resource group. Create a resource group by using [az group create](/cli/azure/group#az_group_create). The following example is used to create a resource group named in a specified Azure location (region):
+In Azure, you can associate related resources by using a resource group. Create a resource group by using [az group create](/cli/azure/group#az_group_create). The following example creates a resource group named *my-osm-bicep-aks-cluster-rg* in a specified Azure location (region):
```azurecli-interactive az group create --name <my-osm-bicep-aks-cluster-rg> --location <azure-region>
az group create --name <my-osm-bicep-aks-cluster-rg> --location <azure-region>
### Create the main and parameters Bicep files
-Using Visual Studio Code with a bash terminal open, create a directory to store the necessary Bicep deployment files. The following example creates a directory named `bicep-osm-aks-addon` and changes to the directory
+By using Visual Studio Code with a Bash terminal open, create a directory to store the necessary Bicep deployment files. The following example creates a directory named *bicep-osm-aks-addon* and changes to the directory:
```azurecli-interactive mkdir bicep-osm-aks-addon cd bicep-osm-aks-addon ```
-Next create both the main and parameters files, as shown in the following example.
+Next, create both the main file and the parameters file, as shown in the following example:
```azurecli-interactive touch osm.aks.bicep && touch osm.aks.parameters.json ```
-Open the `osm.aks.bicep` file and copy the following example content to it, then save the file.
+Open the *osm.aks.bicep* file and copy the following example content to it. Then save the file.
```azurecli-interactive // https://docs.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters
resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-03-01' = {
} ```
-Open the `osm.aks.parameters.json` file and copy the following example content to it. Add the deployment-specific parameters, then save the file.
+Open the *osm.aks.parameters.json* file and copy the following example content to it. Add the deployment-specific parameters, and then save the file.
> [!NOTE]
-> The `osm.aks.parameters.json` is an example template parameters file needed for the Bicep deployment. You will have to update the specified parameters specifically for your deployment environment. The specific parameter values used by this example needs the following parameters to be updated. They are the _clusterName_, _clusterDNSPrefix_, _k8Version_, and _sshPubKey_. To find a list of supported Kubernetes version in your region, please use the `az aks get-versions --location <region>` command.
+> The *osm.aks.parameters.json* file is an example template parameters file needed for the Bicep deployment. Update the parameters specifically for your deployment environment. The specific parameter values in this example need the following parameters to be updated: `clusterName`, `clusterDNSPrefix`, `k8Version`, and `sshPubKey`. To find a list of supported Kubernetes versions in your region, use the `az aks get-versions --location <region>` command.
```azurecli-interactive {
Open the `osm.aks.parameters.json` file and copy the following example content t
} ```
-### Deploy the Bicep file
+### Deploy the Bicep files
-To deploy the previously created Bicep files, open the terminal and authenticate to your Azure account for the Azure CLI using the `az login` command. Once authenticated to your Azure subscription, run the following commands for deployment.
+To deploy the previously created Bicep files, open the terminal and authenticate to your Azure account for the Azure CLI by using the `az login` command. After you're authenticated to your Azure subscription, run the following commands for deployment:
```azurecli-interactive az group create --name osm-bicep-test --location eastus2
az deployment group create \
--parameters @osm.aks.parameters.json ```
-When the deployment finishes, you should see a message indicating the deployment succeeded.
+When the deployment finishes, you should see a message that says the deployment succeeded.
-## Validate the AKS OSM add-on installation
+## Validate installation of the OSM add-on
-There are several commands to run to check all of the components of the AKS OSM add-on are enabled and running:
+You use several commands to check that all of the components of the OSM add-on are enabled and running.
-First we can query the add-on profiles of the cluster to check the enabled state of the add-ons installed. The following command should return "true".
+First, query the add-on profiles of the cluster to check the enabled state of the installed add-ons. The following command should return `true`:
```azurecli-interactive az aks list -g <my-osm-aks-cluster-rg> -o json | jq -r '.[].addonProfiles.openServiceMesh.enabled' ```
-The following `kubectl` commands will report the status of the osm-controller.
+The following `kubectl` commands will report the status of *osm-controller*:
```azurecli-interactive kubectl get deployments -n kube-system --selector app=osm-controller
kubectl get pods -n kube-system --selector app=osm-controller
kubectl get services -n kube-system --selector app=osm-controller ```
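If any of those resources look unhealthy, a follow-up check is to wait for the controller rollout to finish. This sketch assumes the deployment is named `osm-controller`, matching the selector used above:

```azurecli-interactive
# Blocks until the osm-controller deployment reports a successful rollout.
kubectl rollout status deployment/osm-controller -n kube-system
```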
-## Accessing the AKS OSM add-on configuration
+## Access the OSM add-on configuration
-Currently you can access and configure the OSM controller configuration via the OSM MeshConfig resource, and you can view the OSM controller configuration settings via the CLI use the **kubectl** get command as shown below.
+You can configure the OSM controller via the OSM MeshConfig resource, and you can view the OSM controller's configuration settings by using the `kubectl get` command, as shown in the following example:
```azurecli-interactive kubectl get meshconfig osm-mesh-config -n kube-system -o yaml ```
-Output of the MeshConfig is shown in the following:
+Here's an example output of MeshConfig:
``` apiVersion: config.openservicemesh.io/v1alpha1
spec:
useHTTPSIngress: false ```
-Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh. The discovered services will have traffic policy rules programed on each Envoy proxy sidecar to allow communications between these services.
+Notice that `enablePermissiveTrafficPolicyMode` is configured to `true`. In OSM, permissive traffic policy mode bypasses [SMI](https://smi-spec.io/) traffic policy enforcement. In this mode, OSM automatically discovers services that are a part of the service mesh. The discovered services will have traffic policy rules programmed on each Envoy proxy sidecar to allow communications between these services.
> [!WARNING]
-> Before proceeding please verify that your permissive traffic policy mode is set to true, if not please change it to **true** using the command below
-
-```OSM Permissive Mode to True
-kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
-```
+> Before you proceed, verify that your permissive traffic policy mode is set to `true`. If it isn't, change it to `true` by using the following command:
+>
+> ```OSM Permissive Mode to True
+> kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
+> ```
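To verify that the setting took effect, a minimal check that reads back just that field (the JSON path comes from the patch command above):

```azurecli-interactive
# Prints "true" when permissive traffic policy mode is enabled.
kubectl get meshconfig osm-mesh-config -n kube-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'
```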
## Clean up resources
-When the Azure resources are no longer needed, use the Azure CLI to delete the deployment test resource group.
+When you no longer need the Azure resources, use the Azure CLI to delete the deployment's test resource group:
``` az group delete --name osm-bicep-test ```
-Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh (OSM) add-on from your AKS cluster][osm-uninstall].
+Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall].
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+This article showed you how to install the OSM add-on on an AKS cluster and verify that it's installed and running. With the OSM add-on installed on your cluster, you can [deploy a sample application][osm-deploy-sample-app] or [onboard an existing application][osm-onboard-app] to work with your OSM mesh.
<!-- Links --> <!-- Internal -->
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For new **minor** versions of Kubernetes:
* Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support. For new **patch** versions of Kubernetes:
- * Because of the urgent nature of patch versions, they can be introduced into the service as they become available.
   + * Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches will have a two-month minimum lifecycle.
* In general, AKS does not broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch.
- * Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support.
+ * Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you will **no longer be able to create clusters or node pools once the version is deprecated/removed.**
### Supported versions policy exceptions
No. Once a version is deprecated/removed, you cannot create a cluster with that
No. You will not be allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version. However, this may require you to update the control plane first.
+**How often do you update patches?**
+
+Patches have a two-month minimum lifecycle. To stay up to date when new patches are released, follow the [AKS Release Notes](https://github.com/Azure/AKS/releases).
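To see which versions, including patches, your existing cluster can move to, a hedged example with placeholder names:

```azurecli-interactive
# Shows the current control plane version and the upgrade versions available for the cluster.
az aks get-upgrades --resource-group <my-resource-group> --name <my-aks-cluster> --output table
```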
+ ## Next steps For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-import-restrictions.md
description: Details of known issues and restrictions on Open API, WSDL, and WAD
documentationcenter: '' - -- Previously updated : 10/26/2021+ Last updated : 03/02/2022
When importing an API, you might encounter some restrictions or need to identify and rectify issues before you can successfully import. In this article, you'll learn: * API Management's behavior during OpenAPI import.
-* Import limitations, organized by the import format of the API.
-* How OpenAPI export works.
+* OpenAPI import limitations and how OpenAPI export works.
+* Requirements and limitations for WSDL and WADL import.
## API Management during OpenAPI import
For each operation, its:
## <a name="wsdl"> </a>WSDL
-You can create SOAP pass-through and SOAP-to-REST APIs with WSDL files.
+You can create [SOAP pass-through](import-soap-api.md) and [SOAP-to-REST](restify-soap-api.md) APIs with WSDL files.
### SOAP bindings - Only SOAP bindings of "document" and "literal" encoding style are supported. - No support for "rpc" style or SOAP-Encoding.
-### WSDL:Import
-Not supported. Instead, merge the imports into one document.
+### Unsupported directives
+`wsdl:import`, `xsd:import`, and `xsd:include` aren't supported. Instead, merge the dependencies into one document.
+
+For an open-source tool to resolve and merge `wsdl:import`, `xsd:import`, and `xsd:include` dependencies in a WSDL file, see this [GitHub repo](https://github.com/Azure-Samples/api-management-schema-import).
### Messages with multiple parts This message type is not supported.
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
The Azure portal provides commands to create self-hosted gateway resources in th
Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production. ## Number of replicas
-The minimum number of replicas suitable for production is two.
+The minimum number of replicas suitable for production is three, preferably combined with [highly available scheduling of the instances](#high-availability).
By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
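As a rough illustration of the replica guidance above (placeholder names for the gateway deployment and namespace; your deployment manifest or Helm values are the better place to set this permanently):

```azurecli-interactive
# Scale the self-hosted gateway deployment to the recommended minimum of three replicas.
kubectl scale deployment <gateway-deployment-name> -n <gateway-namespace> --replicas=3
```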
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
Title: Import SOAP API using the Azure portal | Microsoft Docs
-description: Learn how to import a standard XML representation of a SOAP API, and then test the API in the Azure and Developer portals.
+ Title: Import SOAP API to Azure API Management using the portal | Microsoft Docs
+description: Learn how to import a SOAP API to Azure API Management as a WSDL specification. Then, test the API in the Azure portal.
Previously updated : 02/10/2022 Last updated : 03/01/2022
-# Import SOAP API
+# Import SOAP API to API Management
-This article shows how to import a standard XML representation of a SOAP API. The article also shows how to test the API Management API.
+This article shows how to import a WSDL specification, which is a standard XML representation of a SOAP API. The article also shows how to test the API in API Management.
In this article, you learn how to: > [!div class="checklist"]
-> * Import SOAP API
+> * Import a SOAP API
> * Test the API in the Azure portal + ## Prerequisites Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md) [!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
-## <a name="create-api"> </a>Import and publish a back-end API
-
-1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu.
-2. Select **WSDL** from the **Add a new API** list.
-
- ![Soap api](./media/import-soap-api/wsdl-api.png)
-3. In the **WSDL specification**, enter the URL to where your SOAP API resides.
-4. The **SOAP pass-through** radio button is selected by default. With this selection, the API is going to be exposed as SOAP. Consumer has to use SOAP rules. If you want to "restify" the API, follow the steps in [Import a SOAP API and convert it to REST](restify-soap-api.md).
-
- ![Screenshot shows the Create from W S D L dialog box where you can enter a W S D L specification.](./media/import-soap-api/pass-through.png)
-5. Press tab.
+## <a name="create-api"> </a>Import and publish a backend API
- The following fields get filled up with the info from the SOAP API: Display name, Name, Description.
-6. Add an API URL suffix. The suffix is a name that identifies this specific API in this API Management instance. It has to be unique in this API Management instance.
-7. Publish the API by associating the API with a product. In this case, the "*Unlimited*" product is used. If you want for the API to be published and be available to developers, add it to a product. You can do it during API creation or set it later.
+1. From the left menu, under the **APIs** section, select **APIs** > **+ Add API**.
+1. Under **Create from definition**, select **WSDL**.
- Products are associations of one or more APIs. You can include a number of APIs and offer them to developers through the developer portal. Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is good for any API in that product. If you created the API Management instance, you are an administrator already, so you are subscribed to every product by default.
+ ![SOAP API](./media/import-soap-api/wsdl-api.png)
+1. In **WSDL specification**, enter the URL to your SOAP API, or click **Select a file** to select a local WSDL file.
+1. In **Import method**, **SOAP pass-through** is selected by default.
+ With this selection, the API is exposed as SOAP, and API consumers have to use SOAP rules. If you want to "restify" the API, follow the steps in [Import a SOAP API and convert it to REST](restify-soap-api.md).
- By default, each API Management instance comes with two sample products:
+    ![Create SOAP API from WSDL specification](./media/import-soap-api/pass-through.png)
+1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab.
- * **Starter**
- * **Unlimited**
-8. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
-9. Select **Create**.
+    For more information about API settings, see the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
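If you prefer scripting the import rather than using the portal, the Azure CLI has an `az apim api import` command that accepts a WSDL specification. The following is only a sketch with placeholder names; depending on your WSDL you may also need to supply the WSDL service and endpoint names, so check `az apim api import --help` for the exact options in your CLI version:

```azurecli-interactive
# Import a SOAP API from a WSDL URL into an existing API Management instance (placeholder names).
az apim api import \
  --resource-group <resource-group-name> \
  --service-name <apim-instance-name> \
  --api-id <api-id> \
  --path <api-url-suffix> \
  --specification-format Wsdl \
  --specification-url <wsdl-url>
```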
### Test the new API in the portal
-Operations can be called directly from the administrative portal, which provides a convenient way to view and test the operations of an API.
+Operations can be called directly from the portal, which provides a convenient way to view and test the operations of an API.
1. Select the API you created in the previous step. 2. Press the **Test** tab. 3. Select some operation.
- The page displays fields for query parameters and fields for the headers. One of the headers is "Ocp-Apim-Subscription-Key", for the subscription key of the product that is associated with this API. If you created the API Management instance, you are an administrator already, so the key is filled in automatically.
+ The page displays fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you're an administrator already, so the key is filled in automatically.
1. Press **Send**.
- Backend responds with **200 OK** and some data.
+ When the test is successful, the backend responds with **200 OK** and some data.
## Wildcard SOAP action
api-management Restify Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/restify-soap-api.md
Title: Import a SOAP API and convert to REST using the Azure portal | Microsoft Docs
-description: Learn how to import a SOAP API, convert it to REST with API Management, and then test the API in the Azure and Developer portals.
+ Title: Import SOAP API to Azure API Management and convert to REST using the portal | Microsoft Docs
+description: Learn how to import a SOAP API to Azure API Management as a WSDL specification and convert it to a REST API. Then, test the API in the Azure portal.
- -- Previously updated : 11/22/2017+ Last updated : 03/01/2022
-# Import a SOAP API and convert to REST
+# Import SOAP API to API Management and convert to REST
-This article shows how to import a SOAP API and convert it to REST. The article also shows how to test the APIM API.
+This article shows how to import a SOAP API as a WSDL specification and then convert it to a REST API. The article also shows how to test the API in API Management.
In this article, you learn how to: > [!div class="checklist"] > * Import a SOAP API and convert to REST > * Test the API in the Azure portal
-> * Test the API in the Developer portal
+ ## Prerequisites
Complete the following quickstart: [Create an Azure API Management instance](get
## <a name="create-api"> </a>Import and publish a back-end API
-1. Select **APIs** from under **API MANAGEMENT**.
-2. Select **WSDL** from the **Add a new API** list.
+1. From the left menu, under the **APIs** section, select **APIs** > **+ Add API**.
+1. Under **Create from definition**, select **WSDL**.
![SOAP API](./media/restify-soap-api/wsdl-api.png)
-3. In the **WSDL specification**, enter the URL to where your SOAP API resides.
-4. Click **SOAP to REST** radio button. When this option is clicked, APIM attempts to make an automatic transformation between XML and JSON. In this case consumers should be calling the API as a RESTful API, which returns JSON. APIM is converting each request into a SOAP call.
+1. In **WSDL specification**, enter the URL to your SOAP API, or click **Select a file** to select a local WSDL file.
+1. In **Import method**, select **SOAP to REST**.
+ When this option is selected, API Management attempts to make an automatic transformation between XML and JSON. In this case, consumers should call the API as a RESTful API, which returns JSON. API Management converts each request to a SOAP call.
![SOAP to REST](./media/restify-soap-api/soap-to-rest.png)
-5. Press tab.
-
- The following fields get filled up with the info from the SOAP API: Display name, Name, Description.
-6. Add an API URL suffix. The suffix is a name that identifies this specific API in this APIM instance. It has to be unique in this APIM instance.
-9. Publish the API by associating the API with a product. In this case, the "*Unlimited*" product is used. If you want for the API to be published and be available to developers, add it to a product. You can do it during API creation or set it later.
-
- Products are associations of one or more APIs. You can include a number of APIs and offer them to developers through the developer portal. Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is good for any API in that product. If you created the APIM instance, you are an administrator already, so you are subscribed to every product by default.
-
- By default, each API Management instance comes with two sample products:
+1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab.
- * **Starter**
- * **Unlimited**
-10. Select **Create**.
+    For more information about API settings, see the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
## Test the new API in the Azure portal Operations can be called directly from the Azure portal, which provides a convenient way to view and test the operations of an API. 1. Select the API you created in the previous step.
-2. Press the **Test** tab.
-3. Select some operation.
+2. Select the **Test** tab.
+3. Select an operation.
- The page displays fields for query parameters and fields for the headers. One of the headers is "Ocp-Apim-Subscription-Key", for the subscription key of the product that is associated with this API. If you created the APIM instance, you are an administrator already, so the key is filled in automatically.
+ The page displays fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you're an administrator already, so the key is filled in automatically.
1. Press **Send**.
- Backend responds with **200 OK** and some data.
+ When the test is successful, the backend responds with **200 OK** and some data.
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-append-apis.md)]
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported .NET Core versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --linux | grep DOTNET
+az webapp list-runtimes --os linux | grep DOTNET
``` ::: zone-end
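Once you've picked a version from that list, setting it follows the same pattern as the `az webapp config show` command above. A sketch, assuming `DOTNETCORE|6.0` is one of the runtime strings returned for your region:

```azurecli-interactive
# Set the Linux runtime of the app to the chosen .NET version.
az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "DOTNETCORE|6.0"
```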
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
az webapp config show --name <app-name> --resource-group <resource-group-name> -
To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes | grep java
+az webapp list-runtimes --os windows | grep java
``` ::: zone-end
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
+az webapp list-runtimes --os linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
``` ::: zone-end
az webapp list-runtimes --linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
### Build Tools #### Maven+ With the [Maven Plugin for Azure Web Apps](https://github.com/microsoft/azure-maven-plugins/tree/develop/azure-webapp-maven-plugin), you can prepare your Maven Java project for Azure Web App easily with one command in your project root: ```shell
mvn com.microsoft.azure:azure-webapp-maven-plugin:2.2.0:config
``` This command adds a `azure-webapp-maven-plugin` plugin and related configuration by prompting you to select an existing Azure Web App or create a new one. Then you can deploy your Java app to Azure using the following command:+ ```shell mvn package azure-webapp:deploy ``` Here is a sample configuration in `pom.xml`:+ ```xml <plugin> <groupId>com.microsoft.azure</groupId>
Here is a sample configuration in `pom.xml`:
``` #### Gradle+ 1. Setup the [Gradle Plugin for Azure Web Apps](https://github.com/microsoft/azure-gradle-plugins/tree/master/azure-webapp-gradle-plugin) by adding the plugin to your `build.gradle`:+ ```groovy plugins { id "com.microsoft.azure.azurewebapp" version "1.2.0"
Here is a sample configuration in `pom.xml`:
1. Configure your Web App details, corresponding Azure resources will be created if not exist. Here is a sample configuration, for details, refer to this [document](https://github.com/microsoft/azure-gradle-plugins/wiki/Webapp-Configuration).+ ```groovy azurewebapp { subscription = '<your subscription id>'
Here is a sample configuration, for details, refer to this [document](https://gi
``` 1. Deploy with one command.+ ```shell gradle azureWebAppDeploy ```
-
+ ### IDEs+ Azure provides seamless Java App Service development experience in popular Java IDEs, including:+ - *VS Code*: [Java Web Apps with Visual Studio Code](https://code.visualstudio.com/docs/java/java-webapp#_deploy-web-apps-to-the-cloud) - *IntelliJ IDEA*:[Create a Hello World web app for Azure App Service using IntelliJ](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app) - *Eclipse*:[Create a Hello World web app for Azure App Service using Eclipse](/azure/developer/java/toolkit-for-eclipse/create-hello-world-web-app) ### Kudu API+ #### Java SE
-To deploy .jar files to Java SE, use the `/api/publish/` endpoint of the Kudu site. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
+To deploy .jar files to Java SE, use the `/api/publish/` endpoint of the Kudu site. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
> [!NOTE]
-> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
+> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
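As an alternative to calling the Kudu endpoint directly, you can deploy a .jar with the Azure CLI. A sketch with placeholder names and an example local path:

```azurecli-interactive
# Deploy a local JAR package to the app (the source path is only an example).
az webapp deploy --resource-group <resource-group-name> --name <app-name> --src-path ./target/app.jar --type jar
```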
#### Tomcat
Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-
Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az_webapp_log_config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. If you need longer retention, configure the application to write output to a Blob storage container. Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
-Azure Blob Storage logging for Linux based App Services can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)
+Azure Blob Storage logging for Linux-based App Services can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
::: zone-end
To configure the app setting from the Maven plugin, add setting/value tags in th
::: zone pivot="platform-windows" > [!NOTE]
-> You do not need to create a web.config file when using Tomcat on Windows App Service.
+> You do not need to create a web.config file when using Tomcat on Windows App Service.
::: zone-end
To enable via the Azure CLI, you will need to create an Application Insights res
> To retrieve a list of other locations, run `az account list-locations`. ::: zone pivot="platform-windows"
-
+ 3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step. ```azurecli
To enable via the Azure CLI, you will need to create an Application Insights res
::: zone-end ::: zone pivot="platform-linux"
-
+ 3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step. ```azurecli
To enable via the Azure CLI, you will need to create an Application Insights res
5. Upload the unpacked NewRelic Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/newrelic*. 6. Modify the YAML file at */home/site/wwwroot/apm/newrelic/newrelic.yml* and replace the placeholder license value with your own license key. 7. In the Azure portal, browse to your application in App Service and create a new Application Setting.
-
+ - For **Java SE** apps, create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`. - For **Tomcat**, create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`. ::: zone-end
-> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
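For a Java SE app, one way to set the agent argument is as an app setting from the CLI. This is a sketch; the agent path matches the New Relic location described above, and the resource names are placeholders:

```azurecli-interactive
# Point the JVM at the uploaded New Relic agent through the JAVA_OPTS app setting.
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
  --settings JAVA_OPTS="-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar"
```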
### Configure AppDynamics
To enable via the Azure CLI, you will need to create an Application Insights res
::: zone-end > [!NOTE]
-> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
## Configure data sources
Next, determine if the data source should be available to one application or to
#### Shared server-level resources
-Tomcat installations on App Service on Windows exist in shared space on the App Service Plan. You can't directly modify a Tomcat installation for server-wide configuration. To make server-level configuration changes to your Tomcat installation, you must copy Tomcat to a local folder, in which you can modify Tomcat's configuration.
+Tomcat installations on App Service on Windows exist in shared space on the App Service Plan. You can't directly modify a Tomcat installation for server-wide configuration. To make server-level configuration changes to your Tomcat installation, you must copy Tomcat to a local folder, in which you can modify Tomcat's configuration.
##### Automate creating custom Tomcat on app start
Finally, place the driver JARs in the Tomcat classpath and restart your App Serv
There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): uploading the JDBC driver, adding the JDBC driver as a module, and registering the module. App Service is a stateless hosting service, so the configuration commands for adding and registering the data source module must be scripted and applied as the container starts.
-1. Obtain your database's JDBC driver.
+1. Obtain your database's JDBC driver.
2. Create an XML module definition file for the JDBC driver. The example shown below is a module definition for PostgreSQL. ```xml
There are three core steps when [registering a data source with JBoss EAP](https
data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=${POSTGRES_CONNECTION_URL,env.POSTGRES_CONNECTION_URL:jdbc:postgresql://db:5432/postgres} --user-name=${POSTGRES_SERVER_ADMIN_FULL_NAME,env.POSTGRES_SERVER_ADMIN_FULL_NAME:postgres} --password=${POSTGRES_SERVER_ADMIN_PASSWORD,env.POSTGRES_SERVER_ADMIN_PASSWORD:example} --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker ```
-1. Create a startup script, `startup_script.sh` that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later you will configure App Service to run this script when the container starts.
+1. Create a startup script, `startup_script.sh` that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later you will configure App Service to run this script when the container starts.
```bash $JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/jboss-cli-commands.cli
If you choose to pin the minor version, you will need to periodically update the
::: zone pivot="platform-linux" ## JBoss EAP App Service Plans+ <a id="jboss-eap-hardware-options"></a> JBoss EAP is only available on the Premium v3 and Isolated v2 App Service Plan types. Customers that created a JBoss EAP site on a different tier during the public preview should scale up to Premium or Isolated hardware tier to avoid unexpected behavior.
Microsoft and Adoptium builds of OpenJDK are provided and supported on App Servi
| Java 11 | 11.0.13 (MSFT) | 11.0.13 (MSFT) | | Java 17 | 17.0.1 (MSFT) | 17.0.1 (MSFT) |
-\* In following releases, Java 8 on Linux will be distributed from Adoptium builds of the OpenJDK.
+\* In following releases, Java 8 on Linux will be distributed from Adoptium builds of the OpenJDK.
If you are [pinned](#choosing-a-java-runtime-version) to an older minor version of Java your site may be using the [Zulu for Azure](https://www.azul.com/downloads/azure-only/zulu/) binaries provided through [Azul Systems](https://www.azul.com/). You can continue to use these binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java.
Supported JDKs are automatically patched on a quarterly basis in January, April,
Patches and fixes for major security vulnerabilities will be released as soon as they become available in Microsoft builds of the OpenJDK. A "major" vulnerability is defined by a base score of 9.0 or higher on the [NIST Common Vulnerability Scoring System, version 2](https://nvd.nist.gov/vuln-metrics/cvss).
-Tomcat 8.0 has reached [End of Life (EOL) as of September 30, 2018](https://tomcat.apache.org/tomcat-80-eol.html). While the runtime is still available on Azure App Service, Azure will not apply security updates to Tomcat 8.0. If possible, migrate your applications to Tomcat 8.5 or 9.0. Both Tomcat 8.5 and 9.0 are available on Azure App Service. See the [official Tomcat site](https://tomcat.apache.org/whichversion.html) for more information.
+Tomcat 8.0 has reached [End of Life (EOL) as of September 30, 2018](https://tomcat.apache.org/tomcat-80-eol.html). While the runtime is still available on Azure App Service, Azure will not apply security updates to Tomcat 8.0. If possible, migrate your applications to Tomcat 8.5 or 9.0. Both Tomcat 8.5 and 9.0 are available on Azure App Service. See the [official Tomcat site](https://tomcat.apache.org/whichversion.html) for more information.
-Community support for Java 7 will terminate on July 29th, 2022 and [Java 7 will be retired from App Service](https://azure.microsoft.com/updates/transition-to-java-11-or-8-by-29-july-2022/) at that time. If you have a web app runnning on Java 7, please upgrade to Java 8 or 11 before July 29th.
+Community support for Java 7 will terminate on July 29th, 2022, and [Java 7 will be retired from App Service](https://azure.microsoft.com/updates/transition-to-java-11-or-8-by-29-july-2022/) at that time. If you have a web app running on Java 7, please upgrade to Java 8 or 11 before July 29th.
### Deprecation and retirement
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md
az webapp config appsettings list --name <app-name> --resource-group <resource-g
To show all supported Node.js versions, navigate to `https://<sitename>.scm.azurewebsites.net/api/diagnostics/runtime` or run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes | grep node
+az webapp list-runtimes --os windows | grep node
``` ::: zone-end
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported Node.js versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --linux | grep NODE
+az webapp list-runtimes --os linux | grep NODE
``` ::: zone-end
To set your app to a [supported Node.js version](#show-nodejs-version), run the
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings WEBSITE_NODE_DEFAULT_VERSION="~16" ```
-> [!NOTE]
+> [!NOTE]
> This example uses the recommended "tilde syntax" to target the latest available version of Node.js 16 runtime on App Service.
->
+>
>Since the runtime is regularly patched and updated by the platform it's not recommended to target a specific minor version/patch as these are not guaranteed to be available due to potential security risks. > [!NOTE]
The Node.js containers come with [PM2](https://pm2.keymetrics.io/), a production
|[Run npm start](#run-npm-start)|Development use only.| |[Run custom command](#run-custom-command)|Either development or staging.| - ### Run with PM2 The container automatically starts your app with PM2 when one of the common Node.js files is found in your project:
To use a custom *package.json* in your project, run the following command in the
az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<filename>.json" ``` - ## Debug remotely > [!NOTE] > Remote debugging is currently in Preview.
-You can debug your Node.js app remotely in [Visual Studio Code](https://code.visualstudio.com/) if you configure it to [run with PM2](#run-with-pm2), except when you run it using a *.config.js, *.yml, or *.yaml*.
+You can debug your Node.js app remotely in [Visual Studio Code](https://code.visualstudio.com/) if you configure it to [run with PM2](#run-with-pm2), except when you run it using a *.config.js, *.yml, or *.yaml*.
In most cases, no extra configuration is required for your app. If your app is run with a *process.json* file (default or custom), it must have a `script` property in the JSON root. For example:
if (req.secure) {
::: zone-end - ::: zone pivot="platform-linux" ## Monitor with Application Insights Application Insights allows you to monitor your application's performance, exceptions, and usage without making any code changes. To attach the App Insights agent, go to your web app in the Portal and select **Application Insights** under **Settings**, then select **Turn on Application Insights**. Next, select an existing App Insights resource or create a new one. Finally, select **Apply** at the bottom. To instrument your web app using PowerShell, please see [these instructions](../azure-monitor/app/azure-web-apps-nodejs.md#enable-through-powershell)
-This agent will monitor your server-side Node.js application. To monitor your client-side JavaScript, [add the JavaScript SDK to your project](../azure-monitor/app/javascript.md).
+This agent will monitor your server-side Node.js application. To monitor your client-side JavaScript, [add the JavaScript SDK to your project](../azure-monitor/app/javascript.md).
For more information, see the [Application Insights extension release notes](../azure-monitor/app/web-app-extension-release-notes.md).
When a working Node.js app behaves differently in App Service or has errors, try
- [Access the log stream](#access-diagnostic-logs). - Test the app locally in production mode. App Service runs your Node.js apps in production mode, so you need to make sure that your project works as expected in production mode locally. For example:
- - Depending on your *package.json*, different packages may be installed for production mode (`dependencies` vs. `devDependencies`).
- - Certain web frameworks may deploy static files differently in production mode.
- - Certain web frameworks may use custom startup scripts when running in production mode.
+ - Depending on your *package.json*, different packages may be installed for production mode (`dependencies` vs. `devDependencies`).
+ - Certain web frameworks may deploy static files differently in production mode.
+ - Certain web frameworks may use custom startup scripts when running in production mode.
- Run your app in App Service in development mode. For example, in [MEAN.js](https://meanjs.org/), you can set your app to development mode in runtime by [setting the `NODE_ENV` app setting](configure-common.md). ::: zone pivot="platform-windows"
If you deploy your files by using Git, or by using ZIP deployment [with build au
- Your project root has a *package.json* that defines a `start` script that contains the path of a JavaScript file. - Your project root has either a *server.js* or an *app.js*.
-The generated *web.config* is tailored to the detected start script. For other deployment methods, add this *web.config* manually. Make sure the file is formatted properly.
+The generated *web.config* is tailored to the detected start script. For other deployment methods, add this *web.config* manually. Make sure the file is formatted properly.
If you use [ZIP deployment](deploy-zip.md) (through Visual Studio Code, for example), be sure to [enable build automation](deploy-zip.md#enable-build-automation-for-zip-deploy) because it's not enabled by default. [`az webapp up`](/cli/azure/webapp#az_webapp_up) uses ZIP deployment with build automation enabled.
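Build automation for ZIP deploy is controlled by an app setting, so one hedged way to enable it from the CLI (placeholder names) is:

```azurecli-interactive
# Have App Service run the Oryx build (for example, npm install) when a ZIP package is deployed.
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
```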
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported PHP versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes | grep php
+az webapp list-runtimes --os windows | grep php
``` ::: zone-end
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported PHP versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --linux | grep PHP
+az webapp list-runtimes --os linux | grep PHP
``` ::: zone-end
Commit all your changes and deploy your code using Git, or Zip deploy [with buil
## Run Grunt/Bower/Gulp
-If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) with [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy).
+If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy).
To enable your repository to run these tools, you need to add them to the dependencies in *package.json.* For example:
getenv("DB_HOST")
The web framework of your choice may use a subdirectory as the site root. For example, [Laravel](https://laravel.com/), uses the *public/* subdirectory as the site root.
-To customize the site root, set the virtual application path for the app by using the [`az resource update`](/cli/azure/resource#az_resource_update) command. The following example sets the site root to the *public/* subdirectory in your repository.
+To customize the site root, set the virtual application path for the app by using the [`az resource update`](/cli/azure/resource#az_resource_update) command. The following example sets the site root to the *public/* subdirectory in your repository.
```azurecli-interactive az resource update --name web --resource-group <group-name> --namespace Microsoft.Web --resource-type config --parent sites/<app-name> --set properties.virtualApplications[0].physicalPath="site\wwwroot\public" --api-version 2015-06-01 ```
-By default, Azure App Service points the root virtual application path (_/_) to the root directory of the deployed application files (_sites\wwwroot_).
+By default, Azure App Service points the root virtual application path (*/*) to the root directory of the deployed application files (*sites\wwwroot*).
::: zone-end
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
Navigate to the Kudu console (`https://<app-name>.scm.azurewebsites.net/DebugConsole`) and navigate to `d:\home\site`.
-Create a directory in `d:\home\site` called `ini`, then create an *.ini* file in the `d:\home\site\ini` directory (for example, *settings.ini)* with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
+Create a directory in `d:\home\site` called `ini`, then create an *.ini* file in the `d:\home\site\ini` directory (for example, *settings.ini*) with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
For example, to change the value of [expose_php](https://php.net/manual/ini.core.php#ini.expose-php) run the following commands:
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
Navigate to the web SSH session with your Linux container (`https://<app-name>.scm.azurewebsites.net/webssh/host`).
-Create a directory in `/home/site` called `ini`, then create an *.ini* file in the `/home/site/ini` directory (for example, *settings.ini)* with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
+Create a directory in `/home/site` called `ini`, then create an *.ini* file in the `/home/site/ini` directory (for example, *settings.ini*) with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
> [!TIP]
-> In the built-in Linux containers in App Service, */home* is used as persisted shared storage.
+> In the built-in Linux containers in App Service, */home* is used as persisted shared storage.
> For example, to change the value of [expose_php](https://php.net/manual/ini.core.php#ini.expose-php) run the following commands:
When a working PHP app behaves differently in App Service or has errors, try the
- [Access the log stream](#access-diagnostic-logs). - Test the app locally in production mode. App Service runs your app in production mode, so you need to make sure that your project works as expected in production mode locally. For example:
- - Depending on your *composer.json*, different packages may be installed for production mode (`require` vs. `require-dev`).
- - Certain web frameworks may deploy static files differently in production mode.
- - Certain web frameworks may use custom startup scripts when running in production mode.
+ - Depending on your *composer.json*, different packages may be installed for production mode (`require` vs. `require-dev`).
+ - Certain web frameworks may deploy static files differently in production mode.
+ - Certain web frameworks may use custom startup scripts when running in production mode.
- Run your app in App Service in debug mode. For example, in [Laravel](https://laravel.com/), you can configure your app to output debug messages in production by [setting the `APP_DEBUG` app setting to `true`](configure-common.md#configure-app-settings). ::: zone pivot="platform-linux"
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI
- **Azure CLI**: you have two options.
- - Run commands in the [Azure Cloud Shell](../cloud-shell/overview.md).
- - Run commands locally by installing the latest version of the [Azure CLI](/cli/azure/install-azure-cli), then sign in to Azure using [az login](/cli/azure/reference-index#az_login).
-
+ - Run commands in the [Azure Cloud Shell](../cloud-shell/overview.md).
+ - Run commands locally by installing the latest version of the [Azure CLI](/cli/azure/install-azure-cli), then sign in to Azure using [az login](/cli/azure/reference-index#az_login).
+ > [!NOTE] > Linux is currently the recommended option for running Python apps in App Service. For information on the Windows option, see [Python on the Windows flavor of App Service](/visualstudio/python/managing-python-on-azure-app-service).
You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI
- **Azure CLI**:
- - Show the current Python version with [az webapp config show](/cli/azure/webapp/config#az_webapp_config_show):
-
- ```azurecli
- az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
- ```
-
- Replace `<resource-group-name>` and `<app-name>` with the names appropriate for your web app.
-
- - Set the Python version with [az webapp config set](/cli/azure/webapp/config#az_webapp_config_set)
-
- ```azurecli
- az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PYTHON|3.7"
- ```
-
- - Show all Python versions that are supported in Azure App Service with [az webapp list-runtimes](/cli/azure/webapp#az_webapp_list_runtimes):
-
- ```azurecli
- az webapp list-runtimes --linux | grep PYTHON
- ```
-
+ - Show the current Python version with [az webapp config show](/cli/azure/webapp/config#az_webapp_config_show):
+
+ ```azurecli
+ az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
+ ```
+
+ Replace `<resource-group-name>` and `<app-name>` with the names appropriate for your web app.
+
+ - Set the Python version with [az webapp config set](/cli/azure/webapp/config#az_webapp_config_set)
+
+ ```azurecli
+ az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PYTHON|3.7"
+ ```
+
+ - Show all Python versions that are supported in Azure App Service with [az webapp list-runtimes](/cli/azure/webapp#az_webapp_list_runtimes):
+
+ ```azurecli
+ az webapp list-runtimes --os linux | grep PYTHON
+ ```
+ You can run an unsupported version of Python by building your own container image instead. For more information, see [use a custom Docker image](tutorial-custom-container.md?pivots=container-linux). <!-- <a> element here to preserve external links-->
App Service's build system, called Oryx, performs the following steps when you d
1. Run custom post-build script if specified by the `POST_BUILD_COMMAND` setting. (Again, the script can run other Python and Node.js scripts, pip and npm commands, and Node-based tools.)
-By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTSTATIC` settings are empty.
+By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTSTATIC` settings are empty.
- To disable running collectstatic when building Django apps, set the `DISABLE_COLLECTSTATIC` setting to true.
By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTS
- To run post-build commands, set the `POST_BUILD_COMMAND` setting to contain either a command, such as `echo Post-build command`, or a path to a script file relative to your project root, such as `scripts/postbuild.sh`. All commands must use relative paths to the project root folder.
-For additional settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
+For additional settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
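These build settings are ordinary app settings, so a hedged example of configuring a couple of them (placeholder resource names; the script path is only illustrative) looks like this:

```azurecli-interactive
# Run a custom post-build script and skip collectstatic during the Oryx build.
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
  --settings POST_BUILD_COMMAND="scripts/postbuild.sh" DISABLE_COLLECTSTATIC=true
```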
To access the build and deployment logs, see [Access deployment logs](#access-deployment-logs).
For more information on how App Service runs and builds Python apps in Linux, se
> [!NOTE] > The `PRE_BUILD_SCRIPT_PATH` and `POST_BUILD_SCRIPT_PATH` settings are identical to `PRE_BUILD_COMMAND` and `POST_BUILD_COMMAND` and are supported for legacy purposes.
->
+>
> A setting named `SCM_DO_BUILD_DURING_DEPLOYMENT`, if it contains `true` or 1, triggers an Oryx build during deployment. The setting is true when deploying using git, the Azure CLI command `az webapp up`, and Visual Studio Code.
For more information on how App Service runs and builds Python apps in Linux, se
Existing web applications can be redeployed to Azure as follows: 1. **Source repository**: Maintain your source code in a suitable repository like GitHub, which enables you to set up continuous deployment later in this process.
- 1. Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
+ 1. Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
1. **Database**: If your app depends on a database, provision the necessary resources on Azure as well. See [Tutorial: Deploy a Django web app with PostgreSQL - create a database](tutorial-python-postgresql-app.md#3-create-postgres-database-in-azure) for an example.
If your Django web app includes static front-end files, first follow the instruc
For App Service, you then make the following modifications:
-1. Consider using environment variables (for local development) and App Settings (when deploying to the cloud) to dynamically set the Django `STATIC_URL` and `STATIC_ROOT` variables. For example:
+1. Consider using environment variables (for local development) and App Settings (when deploying to the cloud) to dynamically set the Django `STATIC_URL` and `STATIC_ROOT` variables. For example:
```python STATIC_URL = os.environ.get("DJANGO_STATIC_URL", "/static/")
When deployed to App Service, Python apps run within a Linux Docker container th
This container has the following characteristics: - Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the additional arguments `--bind=0.0.0.0 --timeout 600`.
- - You can provide configuration settings for Gunicorn through a *gunicorn.conf.py* file in the project root, as described on [Gunicorn configuration overview](https://docs.gunicorn.org/en/stable/configure.html#configuration-file) (docs.gunicorn.org). You can alternately [customize the startup command](#customize-startup-command).
+ - You can provide configuration settings for Gunicorn through a *gunicorn.conf.py* file in the project root, as described on [Gunicorn configuration overview](https://docs.gunicorn.org/en/stable/configure.html#configuration-file) (docs.gunicorn.org). You can alternately [customize the startup command](#customize-startup-command).
- - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org).
+ - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org).
- By default, the base container image includes only the Flask web framework, but the container supports other frameworks that are WSGI-compliant and compatible with Python 3.6+, such as Django.
This container has the following characteristics:
The *requirements.txt* file *must* be in the project root for dependencies to be installed. Otherwise, the build process reports the error: "Could not find setup.py or requirements.txt; Not running pip install." If you encounter this error, check the location of your requirements file. -- App Service automatically defines an environment variable named `WEBSITE_HOSTNAME` with the web app's URL, such as `msdocs-hello-world.azurewebsites.net`. It also defines `WEBSITE_SITE_NAME` with the name of your app, such as `msdocs-hello-world`.
-
+- App Service automatically defines an environment variable named `WEBSITE_HOSTNAME` with the web app's URL, such as `msdocs-hello-world.azurewebsites.net`. It also defines `WEBSITE_SITE_NAME` with the name of your app, such as `msdocs-hello-world`.
+ - npm and Node.js are installed in the container so you can run Node-based build tools, such as yarn. ## Container startup process
If your main app module is contained in a different file, use a different name f
### Default behavior
-If the App Service doesn't find a custom command, a Django app, or a Flask app, then it runs a default read-only app, located in the _opt/defaultsite_ folder and shown in the following image.
+If the App Service doesn't find a custom command, a Django app, or a Flask app, then it runs a default read-only app, located in the *opt/defaultsite* folder and shown in the following image.
If you deployed code and still see the default app, see [Troubleshooting - App doesn't appear](#app-doesnt-appear).
To specify a startup command or command file:
```azurecli az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>" ```
-
+ Replace `<custom-command>` with either the full text of your startup command or the name of your startup command file.
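+   For example, a concrete invocation might look like the following sketch; the resource group name and the Gunicorn arguments are placeholders to adapt, not required values:
+
+   ```azurecli
+   az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi"
+   ```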
-
+ App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for additional information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com). ### Example startup commands -- **Added Gunicorn arguments**: The following example adds the `--workers=4` to a Gunicorn command line for starting a Django app:
+- **Added Gunicorn arguments**: The following example adds the `--workers=4` argument to a Gunicorn command line for starting a Django app:
```bash
# <module-path> is the relative path to the folder that contains the module
# that contains wsgi.py; <module> is the name of the folder containing wsgi.py.
gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi
```
- ```
+ ```
For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). - **Enable production logging for Django**: Add the `--access-logfile '-'` and `--error-logfile '-'` arguments to the command line:
- ```bash
+ ```bash
# '-' for the log files means stdout for --access-logfile and stderr for --error-logfile.
gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi --access-logfile '-' --error-logfile '-'
- ```
+ ```
These logs will appear in the [App Service log stream](#access-diagnostic-logs). For more information, see [Gunicorn logging](https://docs.gunicorn.org/en/stable/settings.html#logging) (docs.gunicorn.org).
-
+ - **Custom Flask main module**: By default, App Service assumes that a Flask app's main module is *application.py* or *app.py*. If your main module uses a different name, then you must customize the startup command. For example, if you have a Flask app whose main module is *hello.py* and the Flask app object in that file is named `myapp`, then the command is as follows:
+
+   ```bash
+   gunicorn --bind=0.0.0.0 --timeout 600 hello:myapp
+   ```
-
+ If your main module is in a subfolder, such as `website`, specify that folder with the `--chdir` argument:
-
+ ```bash gunicorn --bind=0.0.0.0 --timeout 600 --chdir website hello:myapp ```
-
+ - **Use a non-Gunicorn server**: To use a different web server, such as [aiohttp](https://aiohttp.readthedocs.io/en/stable/web_quickstart.html), use the appropriate command as the startup command or in the startup command file: ```bash
The following sections provide additional guidance for specific issues.
- **You see the default app after deploying your own app code.** The [default app](#default-behavior) appears because you either haven't deployed your app code to App Service, or App Service failed to find your app code and ran the default app instead.
- - Restart the App Service, wait 15-20 seconds, and check the app again.
-
- - Be sure you're using App Service for Linux rather than a Windows-based instance. From the Azure CLI, run the command `az webapp show --resource-group <resource-group-name> --name <app-name> --query kind`, replacing `<resource-group-name>` and `<app-name>` accordingly. You should see `app,linux` as output; otherwise, recreate the App Service and choose Linux.
-
- - Use [SSH](#open-ssh-session-in-browser) to connect directly to the App Service container and verify that your files exist under *site/wwwroot*. If your files don't exist, use the following steps:
+ - Restart the App Service, wait 15-20 seconds, and check the app again.
+
+ - Be sure you're using App Service for Linux rather than a Windows-based instance. From the Azure CLI, run the command `az webapp show --resource-group <resource-group-name> --name <app-name> --query kind`, replacing `<resource-group-name>` and `<app-name>` accordingly. You should see `app,linux` as output; otherwise, recreate the App Service and choose Linux.
+
+ - Use [SSH](#open-ssh-session-in-browser) to connect directly to the App Service container and verify that your files exist under *site/wwwroot*. If your files don't exist, use the following steps:
1. Create an app setting named `SCM_DO_BUILD_DURING_DEPLOYMENT` with the value of 1, redeploy your code, wait a few minutes, then try to access the app again. For more information on creating app settings, see [Configure an App Service app in the Azure portal](configure-common.md). 1. Review your deployment process, [check the deployment logs](#access-deployment-logs), correct any errors, and redeploy the app.
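   A minimal sketch of step 1 using the Azure CLI (the resource group and app names are placeholders):

   ```azurecli
   az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1
   ```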
-
- - If your files exist, then App Service wasn't able to identify your specific startup file. Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command).
+
+ - If your files exist, then App Service wasn't able to identify your specific startup file. Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command).
- <a name="service-unavailable"></a>**You see the message "Service Unavailable" in the browser.** The browser has timed out waiting for a response from App Service, which indicates that App Service started the Gunicorn server, but the app itself did not start. This condition could indicate that the Gunicorn arguments are incorrect, or that there's an error in the app code.
- - Refresh the browser, especially if you're using the lowest pricing tiers in your App Service Plan. The app may take longer to start up when using free tiers, for example, and becomes responsive after you refresh the browser.
+ - Refresh the browser, especially if you're using the lowest pricing tiers in your App Service Plan. The app may take longer to start up when using free tiers, for example, and becomes responsive after you refresh the browser.
- - Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command).
+ - Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command).
- - Examine the [app log stream](#access-diagnostic-logs) for any error messages. The logs will show any errors in the app code.
+ - Examine the [app log stream](#access-diagnostic-logs) for any error messages. The logs will show any errors in the app code.
#### Could not find setup.py or requirements.txt - **The log stream shows "Could not find setup.py or requirements.txt; Not running pip install."**: The Oryx build process failed to find your *requirements.txt* file.
- - Connect to the web app's container via [SSH](#open-ssh-session-in-browser) and verify that *requirements.txt* is named correctly and exists directly under *site/wwwroot*. If it doesn't exist, make site the file exists in your repository and is included in your deployment. If it exists in a separate folder, move it to the root.
+ - Connect to the web app's container via [SSH](#open-ssh-session-in-browser) and verify that *requirements.txt* is named correctly and exists directly under *site/wwwroot*. If it doesn't exist, make sure the file exists in your repository and is included in your deployment. If it exists in a separate folder, move it to the root.
#### ModuleNotFoundError when app starts
If you're encountering this error with the sample in [Tutorial: Deploy a Django
- **You see the message, "Fatal SSL Connection is Required"**: Check any usernames and passwords used to access resources (such as databases) from within the app.
-## More resources:
+## More resources
- [Tutorial: Python app with PostgreSQL](tutorial-python-postgresql-app.md) - [Tutorial: Deploy from private container repository](tutorial-custom-container.md?pivots=container-linux)
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-ruby.md
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported Ruby versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --linux | grep RUBY
+az webapp list-runtimes --os linux | grep RUBY
``` You can run an unsupported version of Ruby by building your own container image instead. For more information, see [use a custom Docker image](tutorial-custom-container.md?pivots=container-linux).
az webapp config set --resource-group <resource-group-name> --name <app-name> --
> [!NOTE] > If you see errors similar to the following during deployment time:
+>
> ``` > Your Ruby version is 2.3.3, but your Gemfile specified 2.3.1 > ```
+>
> or
+>
> ``` > rbenv: version `2.3.1' is not installed > ```
+>
> It means that the Ruby version configured in your project is different than the version that's installed in the container you're running (`2.3.3` in the example above). In the example above, check both *Gemfile* and *.ruby-version* and verify that the Ruby version is not set, or is set to the version that's installed in the container you're running (`2.3.3` in the example above). ## Access environment variables
ENV['WEBSITE_SITE_NAME']
When you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the deployment engine (Kudu) automatically runs the following post-deployment steps by default: 1. Check if a *Gemfile* exists.
-1. Run `bundle clean`.
+1. Run `bundle clean`.
1. Run `bundle install --path "vendor/bundle"`. 1. Run `bundle package` to package gems into vendor/cache folder.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
*Azure App Service* is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and [Linux](#app-service-on-linux)-based environments.
-App Service not only adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management. You can also take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domain, and TLS/SSL certificates.
+App Service not only adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management, but also lets you take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domains, and TLS/SSL certificates.
-With App Service, you pay for the Azure compute resources you use. The compute resources you use are determined by the _App Service plan_ that you run your apps on. For more information, see [Azure App Service plans overview](overview-hosting-plans.md).
+With App Service, you pay for the Azure compute resources you use. The compute resources you use are determined by the *App Service plan* that you run your apps on. For more information, see [Azure App Service plans overview](overview-hosting-plans.md).
## Why use App Service?
App Service can also host web apps natively on Linux for supported application s
### Built-in languages and frameworks
-App Service on Linux supports a number of language specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --linux`](/cli/azure/webapp#az_webapp_list_runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
+App Service on Linux supports a number of language-specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az_webapp_list_runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
-Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes where they will be the most successful.
+Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes where they will be the most successful.
When an outdated runtime is hidden from the Portal, any of your existing sites using that version will continue to run. If a runtime is fully removed from the App Service platform, your Azure subscription owner(s) will receive an email notice before the removal.
If you need to create another web app with an outdated runtime version that is n
> Linux and Windows App Service plans can now share resource groups. This limitation has been lifted from the platform and existing resource groups have been updated to support this. > -- App Service on Linux is not supported on [Shared](https://azure.microsoft.com/pricing/details/app-service/plans/) pricing tier. -- The Azure portal shows only features that currently work for Linux apps. As features are enabled, they're activated on the portal.-- When deployed to built-in images, your code and content are allocated a storage volume for web content, backed by Azure Storage. The disk latency of this volume is higher and more variable than the latency of the container filesystem. Apps that require heavy read-only access to content files may benefit from the custom container option, which places files in the container filesystem instead of on the content volume.
+* App Service on Linux is not supported on the [Shared](https://azure.microsoft.com/pricing/details/app-service/plans/) pricing tier.
+* The Azure portal shows only features that currently work for Linux apps. As features are enabled, they're activated on the portal.
+* When deployed to built-in images, your code and content are allocated a storage volume for web content, backed by Azure Storage. The disk latency of this volume is higher and more variable than the latency of the container filesystem. Apps that require heavy read-only access to content files may benefit from the custom container option, which places files in the container filesystem instead of on the content volume.
## Next steps
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md
az group create --name myResourceGroup --location eastus
[!INCLUDE [app-service-arc-get-custom-location](../../includes/app-service-arc-get-custom-location.md)] - ## 3. Create an app
-The following example creates a Node.js app. Replace `<app-name>` with a name that's unique within your cluster (valid characters are `a-z`, `0-9`, and `-`). To see all supported runtimes, run [`az webapp list-runtimes --linux`](/cli/azure/webapp).
+The following example creates a Node.js app. Replace `<app-name>` with a name that's unique within your cluster (valid characters are `a-z`, `0-9`, and `-`). To see all supported runtimes, run [`az webapp list-runtimes --os linux`](/cli/azure/webapp).
```azurecli-interactive az webapp create \
az webapp deployment source config-zip --resource-group myResourceGroup --name <
> [!NOTE] > To use Log Analytics, you should've previously enabled it when [installing the App Service extension](manage-create-arc-environment.md#install-the-app-service-extension). If you installed the extension without Log Analytics, skip this step.
-Navigate to the [Log Analytics workspace that's configured with your App Service extension](manage-create-arc-environment.md#install-the-app-service-extension), then click Logs in the left navigation. Run the following sample query to show logs over the past 72 hours. Replace `<app-name>` with your web app name. If there's an error when running a query, try again in 10-15 minutes (there may be a delay for Log Analytics to start receiving logs from your application).
+Navigate to the [Log Analytics workspace that's configured with your App Service extension](manage-create-arc-environment.md#install-the-app-service-extension), then click Logs in the left navigation. Run the following sample query to show logs over the past 72 hours. Replace `<app-name>` with your web app name. If there's an error when running a query, try again in 10-15 minutes (there may be a delay for Log Analytics to start receiving logs from your application).
```kusto let StartTime = ago(72h);
AppServiceConsoleLogs_CL
| where AppName_s =~ "<app-name>" ```
-The application logs for all the apps hosted in your Kubernetes cluster are logged to the Log Analytics workspace in the custom log table named `AppServiceConsoleLogs_CL`.
+The application logs for all the apps hosted in your Kubernetes cluster are logged to the Log Analytics workspace in the custom log table named `AppServiceConsoleLogs_CL`.
**Log_s** contains application logs for a given App Service and **AppName_s** contains the App Service app name. In addition to logs you write via your application code, the Log_s column also contains logs on container startup, shutdown, and Function Apps.
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-ruby.md
## Download the sample
-1. In a terminal window, clone the sample application to your local machine, and navigate to the directory containing the sample code.
+1. In a terminal window, clone the sample application to your local machine, and navigate to the directory containing the sample code.
```bash git clone https://github.com/Azure-Samples/ruby-docs-hello-world
```bash git branch -m main ```
-
+ > [!TIP] > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
## Create a web app
-1. Create a [web app](overview.md#app-service-on-linux) in the `myAppServicePlan` App Service plan.
+1. Create a [web app](overview.md#app-service-on-linux) in the `myAppServicePlan` App Service plan.
- In the Cloud Shell, you can use the [`az webapp create`](/cli/azure/webapp) command. In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `RUBY|2.6`. To see all supported runtimes, run [`az webapp list-runtimes --linux`](/cli/azure/webapp).
+ In the Cloud Shell, you can use the [`az webapp create`](/cli/azure/webapp) command. In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `RUBY|2.6`. To see all supported runtimes, run [`az webapp list-runtimes --os linux`](/cli/azure/webapp).
```azurecli-interactive az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'RUBY|2.6' --deployment-local-git
&lt; JSON data removed for brevity. &gt; } </pre>
-
+ You've created an empty new web app, with git deployment enabled. > [!NOTE]
## Deploy your application <pre> remote: Using turbolinks 5.2.0
app-service Tutorial Connect Msi Key Vault Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md
+
+ Title: 'Tutorial: JavaScript connect to Azure services securely with Key Vault'
+description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a JavaScript web app
+ms.devlang: javascript, azurecli
+ Last updated : 10/26/2021+++++
+# Tutorial: Secure Cognitive Service connection from JavaScript App Service using Key Vault
+++
+## Configure JavaScript app
+
+Clone the sample repository locally and deploy the sample application to App Service. Replace *\<app-name>* with a unique name.
+
+```azurecli-interactive
+# Clone and prepare sample application
+git clone https://github.com/Azure-Samples/app-service-language-detector.git
+cd app-service-language-detector/javascript
+zip default.zip *.*
+
+# Save app name as variable for convenience
+appName=<app-name>
+
+az appservice plan create --resource-group $groupName --name $appName --sku FREE --location $region --is-linux
+az webapp create --resource-group $groupName --plan $appName --name $appName --runtime "node|14-lts"
+az webapp config appsettings set --resource-group $groupName --name $appName --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
+az webapp deployment source config-zip --resource-group $groupName --name $appName --src ./default.zip
+```
+
+The preceding commands:
+* Create a Linux App Service plan
+* Create a web app for Node.js 14 LTS
+* Configure the web app to install the npm packages on deployment
+* Upload the zip file, and install the npm packages
+
+## Configure secrets as app settings
+
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
The sample project contains a simple ASP.NET application that uses a custom font
### Install the font
-In Windows Explorer, navigate to _custom-font-win-container-master/CustomFontSample_, right-click _FrederickatheGreat-Regular.ttf_, and select **Install**.
+In Windows Explorer, navigate to *custom-font-win-container-master/CustomFontSample*, right-click *FrederickatheGreat-Regular.ttf*, and select **Install**.
This font is publicly available from [Google Fonts](https://fonts.google.com/specimen/Fredericka+the+Great).
This font is publicly available from [Google Fonts](https://fonts.google.com/spe
Open the *custom-font-win-container-master/CustomFontSample.sln* file in Visual Studio.
-Type `Ctrl+F5` to run the app without debugging. The app is displayed in your default browser.
+Type `Ctrl+F5` to run the app without debugging. The app is displayed in your default browser.
:::image type="content" source="media/tutorial-custom-container/local-app-in-browser.png" alt-text="Screenshot showing the app displayed in the default browser.":::
At the end of the file, add the following line and save the file:
RUN ${source:-obj/Docker/publish/InstallFont.ps1} ```
-You can find _InstallFont.ps1_ in the **CustomFontSample** project. It's a simple script that installs the font. You can find a more complex version of the script in the [Script Center](https://gallery.technet.microsoft.com/scriptcenter/fb742f92-e594-4d0c-8b79-27564c575133).
+You can find *InstallFont.ps1* in the **CustomFontSample** project. It's a simple script that installs the font. You can find a more complex version of the script in the [Script Center](https://gallery.technet.microsoft.com/scriptcenter/fb742f92-e594-4d0c-8b79-27564c575133).
> [!NOTE] > To test the Windows container locally, ensure that Docker is started on your local machine.
A terminal window is opened and displays the image deployment progress. Wait for
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the Azure portal at <https://portal.azure.com>.
## Create a web app
The streamed logs look like this:
::: zone pivot="container-linux" + Azure App Service uses the Docker container technology to host both built-in images and custom images. To see a list of built-in images, run the Azure CLI command, ['az webapp list-runtimes --linux'](/cli/azure/webapp#az_webapp_list_runtimes). If those images don't satisfy your needs, you can build and deploy a custom image. In this tutorial, you learn how to: > [!div class="checklist"]
-> * Push a custom Docker image to Azure Container Registry
-> * Deploy the custom image to App Service
-> * Configure environment variables
-> * Pull image into App Service using a managed identity
-> * Access diagnostic logs
-> * Enable CI/CD from Azure Container Registry to App Service
-> * Connect to the container using SSH
+>
+> - Push a custom Docker image to Azure Container Registry
+> - Deploy the custom image to App Service
+> - Configure environment variables
+> - Pull image into App Service using a managed identity
+> - Access diagnostic logs
+> - Enable CI/CD from Azure Container Registry to App Service
+> - Connect to the container using SSH
Completing this tutorial incurs a small charge in your Azure account for the container registry and can incur more costs for hosting the container for longer than a month.
cd docker-django-webapp-linux
### Download from GitHub
-Instead of using git clone, you can visit [https://github.com/Azure-Samples/docker-django-webapp-linux](https://github.com/Azure-Samples/docker-django-webapp-linux), select **Clone**, and then select **Download ZIP**.
+Instead of using git clone, you can visit [https://github.com/Azure-Samples/docker-django-webapp-linux](https://github.com/Azure-Samples/docker-django-webapp-linux), select **Clone**, and then select **Download ZIP**.
-Unpack the ZIP file into a folder named *docker-django-webapp-linux*.
+Unpack the ZIP file into a folder named *docker-django-webapp-linux*.
Then, open a terminal window in the *docker-django-webapp-linux* folder. ## (Optional) Examine the Docker file
-The file in the sample named _Dockerfile_ that describes the docker image and contains configuration instructions:
+The file in the sample named *Dockerfile* that describes the docker image and contains configuration instructions:
```Dockerfile FROM tiangolo/uwsgi-nginx-flask:python3.6
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \ && apt-get install -y --no-install-recommends dialog \ && apt-get update \
- && apt-get install -y --no-install-recommends openssh-server \
- && echo "$SSH_PASSWD" | chpasswd
+ && apt-get install -y --no-install-recommends openssh-server \
+ && echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/ COPY init.sh /usr/local/bin/
-
+
RUN chmod u+x /usr/local/bin/init.sh EXPOSE 8000 2222
EXPOSE 8000 2222
ENTRYPOINT ["init.sh"] ```
-* The first group of commands installs the app's requirements in the environment.
-* The second group of commands create an [SSH](https://www.ssh.com/ssh/protocol/) server for secure communication between the container and the host.
-* The last line, `ENTRYPOINT ["init.sh"]`, invokes `init.sh` to start the SSH service and Python server.
+- The first group of commands installs the app's requirements in the environment.
+- The second group of commands creates an [SSH](https://www.ssh.com/ssh/protocol/) server for secure communication between the container and the host.
+- The last line, `ENTRYPOINT ["init.sh"]`, invokes `init.sh` to start the SSH service and Python server.
## Build and test the image locally > [!NOTE] > Docker Hub has [quotas on the number of anonymous pulls per IP and the number of authenticated pulls per free user (see **Data transfer**)](https://www.docker.com/pricing). If you notice your pulls from Docker Hub are being limited, try `docker login` if you're not already logged in.
->
+>
1. Run the following command to build the image: ```bash docker build --tag appsvc-tutorial-custom-image . ```
-
+ 1. Test that the build works by running the Docker container locally: ```bash docker run -it -p 8000:8000 appsvc-tutorial-custom-image ```
-
+ This [`docker run`](https://docs.docker.com/engine/reference/commandline/run/) command specifies the port with the `-p` argument followed by the name of the image. `-it` lets you stop it with `Ctrl+C`.
-
> [!TIP] > If you're running on Windows and see the error, *standard_init_linux.go:211: exec user process caused "no such file or directory"*, the *init.sh* file contains CR-LF line endings instead of the expected LF endings. This error happens if you used git to clone the sample repository but omitted the `--config core.autocrlf=input` parameter. In this case, clone the repository again with the `--config` argument. You might also see the error if you edited *init.sh* and saved it with CRLF endings. In this case, save the file again with LF endings only.
In this section, you push the image to Azure Container Registry from which App S
```azurecli-interactive az acr credential show --resource-group myResourceGroup --name <registry-name> ```
-
+ The JSON output of this command provides two passwords along with the registry's user name.
-
+ 1. Use the `docker login` command to sign in to the container registry: ```bash docker login <registry-name>.azurecr.io --username <registry-username> ```
-
+ Replace `<registry-name>` and `<registry-username>` with values from the previous steps. When prompted, type in one of the passwords from the previous step. You use the same registry name in all the remaining steps of this section.
In this section, you push the image to Azure Container Registry from which App S
```bash docker tag appsvc-tutorial-custom-image <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest
- ```
+ ```
1. Use the `docker push` command to push the image to the registry:
In this section, you push the image to Azure Container Registry from which App S
```azurecli-interactive az acr repository list -n <registry-name> ```
-
- The output should show the name of your image.
+ The output should show the name of your image.
## Configure App Service to deploy the image from the registry
To deploy a container to Azure App Service, you first create a web app on App Se
```azurecli-interactive az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --deployment-container-image-name <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest ```
-
+ Replace `<app-name>` with a name for the web app, which must be unique across all of Azure. Also replace `<registry-name>` with the name of your registry from the previous section.
-1. Use [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) to set the `WEBSITES_PORT` environment variable as expected by the app code:
+1. Use [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) to set the `WEBSITES_PORT` environment variable as expected by the app code:
```azurecli-interactive az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings WEBSITES_PORT=8000 ``` Replace `<app-name>` with the name you used in the previous step.
-
+ For more information on this environment variable, see the [readme in the sample's GitHub repository](https://github.com/Azure-Samples/docker-django-webapp-linux). 1. Enable [the system-assigned managed identity](./overview-managed-identity.md) for the web app by using the [`az webapp identity assign`](/cli/azure/webapp/identity#az_webapp_identity-assign) command:
To deploy a container to Azure App Service, you first create a web app on App Se
```azurecli-interactive az account show --query id --output tsv
- ```
+ ```
1. Grant the managed identity permission to access the container registry:
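   The role assignment command itself is truncated in this extract. As an illustrative sketch only, a typical assignment grants the `AcrPull` role to the principal ID returned by the earlier `az webapp identity assign` step; verify the exact command against the full tutorial:

   ```azurecli
   az role assignment create --assignee <principal-id> --scope <registry-resource-id> --role "AcrPull"
   ```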
To deploy a container to Azure App Service, you first create a web app on App Se
```azurecli-interactive az resource update --ids /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/<app-name>/config/web --set properties.acrUseManagedIdentityCreds=True ```
-
+ Replace the following values: - `<subscription-id>` with the subscription ID retrieved from the `az account show` command. - `<app-name>` with the name of your web app.
You can complete these steps once the image is pushed to the container registry
```azurecli-interactive az webapp config container set --name <app-name> --resource-group myResourceGroup --docker-custom-image-name <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest --docker-registry-server-url https://<registry-name>.azurecr.io ```
-
- Replace `<app-name>` with the name of your web app and replace `<registry-name>` in two places with the name of your registry.
+
+ Replace `<app-name>` with the name of your web app and replace `<registry-name>` in two places with the name of your registry.
- When using a registry other than Docker Hub (as this example shows), `--docker-registry-server-url` must be formatted as `https://` followed by the fully qualified domain name of the registry. - The message, "No credential was provided to access Azure Container Registry. Trying to look up..." indicates that Azure is using the app's managed identity to authenticate with the container registry rather than asking for a username and password.
While you're waiting for the App Service to pull in the image, it's helpful to s
```azurecli-interactive az webapp log config --name <app-name> --resource-group myResourceGroup --docker-container-logging filesystem ```
-
+ 1. Enable the log stream: ```azurecli-interactive az webapp log tail --name <app-name> --resource-group myResourceGroup ```
-
+ If you don't see console logs immediately, check again in 30 seconds. You can also inspect the log files from the browser at `https://<app-name>.scm.azurewebsites.net/api/logs/docker`.
In this section, you make a change to the web app code, rebuild the image, and t
</div> </nav> ```
-
+ 1. Save your changes. 1. Change to the *docker-django-webapp-linux* folder and rebuild the image:
COPY sshd_config /etc/ssh/
EXPOSE 8000 2222 ```
-Port 2222 is an internal port accessible only by containers within the bridge network of a private virtual network.
+Port 2222 is an internal port accessible only by containers within the bridge network of a private virtual network.
Finally, the entry script, *init.sh*, starts the SSH server.
service ssh start
1. When you sign in, you're redirected to an informational page for the web app. Select **SSH** at the top of the page to open the shell and use commands. For example, you can examine the processes running within it using the `top` command.
-
+ ## Clean up resources The resources you created in this article might incur ongoing costs. To clean up the resources, you only need to delete the resource group that contains them:
What you learned:
::: zone pivot="container-windows" > [!div class="checklist"]
-> * Deploy a custom image to a private container registry
-> * Deploy and the custom image in App Service
-> * Update and redeploy the image
-> * Access diagnostic logs
-> * Connect to the container using SSH
+>
+> - Deploy a custom image to a private container registry
+> - Deploy and run the custom image in App Service
+> - Update and redeploy the image
+> - Access diagnostic logs
+> - Connect to the container using SSH
::: zone-end ::: zone pivot="container-linux" > [!div class="checklist"]
-> * Push a custom Docker image to Azure Container Registry
-> * Deploy the custom image to App Service
-> * Configure environment variables
-> * Pull image into App Service using a managed identity
-> * Access diagnostic logs
-> * Enable CI/CD from Azure Container Registry to App Service
-> * Connect to the container using SSH
+>
+> - Push a custom Docker image to Azure Container Registry
+> - Deploy the custom image to App Service
+> - Configure environment variables
+> - Pull image into App Service using a managed identity
+> - Access diagnostic logs
+> - Enable CI/CD from Azure Container Registry to App Service
+> - Connect to the container using SSH
::: zone-end - In the next tutorial, you learn how to map a custom DNS name to your app. > [!div class="nextstepaction"]
application-gateway Configure Alerts With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-alerts-with-templates.md
+
+ Title: Configure Azure Monitor alerts for Application Gateway
+description: Learn how to use ARM templates to configure Azure Monitor alerts for Application Gateway
++++ Last updated : 03/03/2022++
+# Configure Azure Monitor alerts for Application Gateway
++
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. For more information about Azure Monitor Alerts for Application Gateway, see [Monitoring Azure Application Gateway](monitor-application-gateway.md#alerts).
+
+## Configure alerts using ARM templates
+
+You can use ARM templates to quickly configure important alerts for Application Gateway. Before you begin, consider the following details:
+
+- Azure Monitor alert rules are charged based on the type and number of signals they monitor. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for pricing information before you deploy. You can also see the estimated cost in the portal after deployment:
+ :::image type="content" source="media/configure-alerts-with-templates/alert-pricing.png" alt-text="Image showing application gateway pricing details":::
+- You need to create an Azure Monitor action group in advance and then use its Resource ID in as many alert rules as you need. Azure Monitor alerts use this action group to notify users that an alert has been triggered. For more information, see [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md).
+>[!TIP]
+> You can manually form the ResourceID for your Action Group by following these steps.
+> 1. Select Azure Monitor in your Azure portal.
+> 1. Open the Alerts page and select Action Groups.
+> 1. Select the action group to view its details.
+> 1. Use the Resource Group Name, Action Group Name and Subscription Info here to form the ResourceID for the action group as shown here: <br>
+> `/subscriptions/<subscription-id-from-your-account>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>`
+- The templates for alerts described here are defined generically for settings like Severity, Aggregation Granularity, Frequency of Evaluation, Condition Type, and so on. You can modify the settings after deployment to meet your needs. See [Understand how metric alerts work in Azure Monitor](../azure-monitor/alerts/alerts-metric-overview.md) for more information.
+- The templates for metric-based alerts use the **Dynamic threshold** value with [High sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#what-does-sensitivity-setting-in-dynamic-thresholds-mean). You can choose to adjust these settings based on your needs.
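+
+As an alternative to composing the action group Resource ID by hand (see the tip above), you can have the Azure CLI return it. A sketch, assuming placeholder resource group and action group names:
+
+```azurecli
+az monitor action-group show --resource-group <resource-group-name> --name <action-group-name> --query id --output tsv
+```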
+
+## ARM templates
+
+The following ARM templates are available to configure Azure Monitor alerts for Application Gateway.
+
+### Alert for Backend Response Status as 5xx
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-5xx%2Fazuredeploy.json)
+
+This notification is based on Metrics signal.
+
+### Alert for average Unhealthy Host Count
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-unhealthy-host%2Fazuredeploy.json)
+
+This notification is based on Metrics signal.
+
+### Alert for Backend Last Byte Response Time
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-lastbyte-resp%2Fazuredeploy.json)
+
+This notification is based on Metrics signal.
+
+### Alert for Key Vault integration issues
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-keyvault-advisor%2Fazuredeploy.json)
+
+This notification is based on its Azure Advisor recommendation.
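+
+If you prefer the Azure CLI to the **Deploy to Azure** buttons, each template can also be deployed with `az deployment group create`. A sketch for the Backend Response Status alert, using the template URI behind the button above; the parameters each template expects aren't shown here and must be supplied:
+
+```azurecli
+# Add --parameters <name>=<value> pairs as required by the template you deploy
+az deployment group create \
+    --resource-group <resource-group-name> \
+    --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/ag-alert-backend-5xx/azuredeploy.json"
+```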
++
+## Next steps
+
+<!-- Add additional links. You can change the wording of these and add more if useful. -->
+
+- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway.
+
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
application-gateway High Traffic Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/high-traffic-support.md
Check your Compute Unit metric for the past one month. Compute unit metric is a
To get notified of any traffic or utilization anomalies, you can set up alerts on certain metrics. See [metrics documentation](./application-gateway-metrics.md) for the complete list of metrics offered by Application Gateway. See [visualize metrics](./application-gateway-metrics.md#metrics-visualization) in the Azure portal and the [Azure monitor documentation](../azure-monitor/alerts/alerts-metric.md) on how to set alerts for metrics.
+To configure alerts using ARM templates, see [Configure Azure Monitor alerts for Application Gateway](configure-alerts-with-templates.md).
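+
+You can also create an individual metric alert rule directly from the Azure CLI. A sketch, assuming placeholder resource IDs and the CPU utilization threshold discussed below; check the metrics documentation for the exact metric name that applies to your SKU:
+
+```azurecli
+az monitor metrics alert create --name cpu-over-80 --resource-group <resource-group-name> \
+    --scopes <application-gateway-resource-id> \
+    --condition "avg CpuUtilization > 80" \
+    --action <action-group-resource-id>
+```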
+ ## Alerts for Application Gateway v1 SKU (Standard/WAF) ### Alert if average CPU utilization crosses 80%
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md
To recover an Automation account, ensure that the following conditions are met:
- Before you attempt to recover a deleted Automation account, ensure that resource group for that account exists. > [!NOTE]
-> You can't recover your Automation account if the resource group is deleted.
+> If the resource group of the Automation account was deleted, you must first recreate a resource group with the same name before you can recover the account. After a few hours, the Automation account reappears in the list of deleted accounts, and you can then restore it.
### Recover a deleted Automation account
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 03/01/2022 Last updated : 03/03/2022
The following versions of the Windows and Linux operating system are officially
* SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) * Red Hat Enterprise Linux (RHEL) 7 and 8 (x64) * Amazon Linux 2 (x64)
-* Oracle Linux 7 (x64)
+* Oracle Linux 7 and 8 (x64)
> [!WARNING] > The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
Connecting machines in your hybrid environment directly with Azure can be accomp
| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md) | At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. | --- > [!IMPORTANT] > The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
The Connected Machine agent for Windows can be installed by using one of the fol
* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell. * From a PowerShell session using a scripted method.
-Installing, updating, and removing the Connected Machine agent will not require you to restart your server.
+Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
After installing the Connected Machine agent for Windows, the following system-w
|GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.| |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
+* The following virtual service account is created during agent installation.
+
+ | Virtual Account | Description |
+ |||
+ | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
+
+ > [!TIP]
+ > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
+
+* The following local security group is created during agent installation.
+
+ | Security group name | Description |
+ ||-|
+ | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
+ * The following environment variables are created during agent installation. |Name |Default value |Description |
After installing the Connected Machine agent for Windows, the following system-w
The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
-Installing, updating, and removing the Connected Machine agent will not require you to restart your server.
+Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Azure Arc network endpoints are now required, onboarding will abort if they are not accessible - New `--skip-network-check` flag to override the new network check behavior - [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints.
+- Oracle Linux 8 is now supported
### Fixed
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
To clear a configuration property's value, run the following command:
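The command itself is truncated in this extract. As an illustration only, clearing the proxy setting looks like the following; `proxy.url` is just one example of a property you can clear:

```bash
azcmagent config clear proxy.url
```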
The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of machine agent and recommends that you upgrade to the latest version. It will notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
-The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, and uninstalling the Azure Connected Machine Agent will not require you to restart your server.
+The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, or uninstalling the Azure Connected Machine Agent will not require you to restart your server.
The following table describes the methods supported to perform the agent upgrade.
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
To manage the Azure Connected Machine agent (azcmagent) on Windows, your user ac
The Azure Connected Machine agent is composed of three services, which run on your machine.
-* The Hybrid Instance Metadata Service (himds) service is responsible for all core functionality of Arc. This includes sending heartbeats to Azure, exposing a local instance metadata service for other apps to learn about the machineΓÇÖs Azure resource ID, and retrieve Azure AD tokens to authenticate to other Azure services. This service runs as an unprivileged virtual service account on Windows, and as the **himds** user on Linux.
+* The Hybrid Instance Metadata Service (himds) service is responsible for all core functionality of Arc. This includes sending heartbeats to Azure, exposing a local instance metadata service for other apps to learn about the machine's Azure resource ID, and retrieving Azure AD tokens to authenticate to other Azure services. This service runs as an unprivileged virtual service account (NT SERVICE\\himds) on Windows, and as the **himds** user on Linux. The virtual service account requires the Log on as a Service right on Windows.
* The Guest Configuration service (GCService) is responsible for evaluating Azure Policy on the machine.
-* The Guest Configuration Extension service (ExtensionService) is responsible for installing, updating, and deleting extensions (agents, scripts, or other software) on the machine.
+* The Guest Configuration Extension service (ExtensionService) is responsible for installing, upgrading, and deleting extensions (agents, scripts, or other software) on the machine.
The guest configuration and extension services run as Local System on Windows, and as root on Linux.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 02/26/2022 Last updated : 03/02/2022 # Compare Azure Government and global Azure
This section outlines variations and considerations when using Identity services
### [Azure Active Directory Premium P1 and P2](../active-directory/index.yml)
+For feature variations and limitations, see [Cloud feature availability](../active-directory/authentication/feature-availability.md).
+ The following features have known limitations in Azure Government: - Limitations with B2B Collaboration in supported Azure US Government tenants: - For more information about B2B collaboration limitations in Azure Government and to find out if B2B collaboration is available in your Azure Government tenant, see [Azure AD B2B in government and national clouds](../active-directory/external-identities/b2b-government-national-clouds.md). - B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error. -- Limitations with multifactor authentication:
- - Hardware OATH tokens are not available in Azure Government.
- - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based off the user's current IP address.
--- Limitations with Azure AD join:
- - Enterprise state roaming for Windows 10 devices is not available
+- Limitations with multi-factor authentication:
+ - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and should not be required based off the user's current IP address.
## Management and governance
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
Title: Deploy an app in Azure Government with Azure Pipelines
-description: Information on configuring continuous deployment to your applications hosted with a subscription in Azure Government by connecting from Azure Pipelines.
+description: Configure continuous deployment to your applications hosted in Azure Government by connecting from Azure Pipelines.
Previously updated : 11/02/2021 Last updated : 03/02/2022 # Deploy an app in Azure Government with Azure Pipelines
-This article helps you use Azure Pipelines to set up continuous integration (CI) and continuous deployment (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repo along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this tutorial, you will build a web app and deploy it to an Azure Governments app service. This build and release process is triggered by a change to a code file in the repo.
-
-> [!NOTE]
-> For special considerations when deploying apps to Azure Government, see **[Deploy apps to Azure Government Cloud](/azure/devops/pipelines/library/government-cloud).**
+This article helps you use Azure Pipelines to set up continuous integration (CI) and continuous deployment (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repo along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this tutorial, you'll build a web app and deploy it to an App Service app in Azure Government. This build and release process is triggered by a change to a code file in the repo.
[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
This article helps you use Azure Pipelines to set up continuous integration (CI)
## Prerequisites
-Before starting this tutorial, you must have the following:
+Before starting this tutorial, you must complete the following prerequisites:
+ [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization) + [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project?;bc=%2fazure%2fdevops%2fuser-guide%2fbreadcrumb%2ftoc.json&tabs=new-nav&toc=%2fazure%2fdevops%2fuser-guide%2ftoc.json) + Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)
-If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/overview/clouds/government/) before you begin.
+If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
## Create Azure Government app service
Follow through one of the quickstarts below to set up a Build for your specific
1. Download or copy and paste the [service principal creation](https://github.com/yujhongmicrosoft/spncreationn/blob/master/spncreation.ps1) PowerShell script into an IDE or editor.
+ > [!NOTE]
+ > This script will be updated to use the Azure Az PowerShell module instead of the deprecated AzureRM PowerShell module.
+ 2. Open up the file and navigate to the `param` parameter. Replace the `$environmentName` variable with
-AzureUSGovernment." This sets the service principal to be created in Azure Government.
+"AzureUSGovernment". This action sets the service principal to be created in Azure Government.
3. Open your PowerShell window and run the following command. This command sets a policy that enables running local files. `Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass`
- When you are asked whether you want to change the execution policy, enter "A" (for "Yes to All").
+ When you're asked whether you want to change the execution policy, enter "A" (for "Yes to All").
4. Navigate to the directory that has the edited script above.
AzureUSGovernment." This sets the service principal to be created in Azure Gover
7. When prompted for the "password" parameter, enter your desired password.
-8. After providing your Azure Government subscription credentials, you should see the following:
+8. After providing your Azure Government subscription credentials, you should see the following message:
> [!NOTE] > The Environment variable should be `AzureUSGovernment`.
-9. After the script has run, you should see your service connection values. Copy these values as we will need them when setting up our endpoint.
+9. After the script has run, you should see your service connection values. Copy these values as we'll need them when setting up our endpoint.
![ps4](./media/documentation-government-vsts-img11.png)
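As a recap of steps 3 through 5, a minimal PowerShell session might look like the following sketch. The folder path is a placeholder for wherever you saved the edited script.

```powershell
# Allow locally created scripts to run in this session only (step 3),
# change to the folder containing the edited script (step 4), then run it (step 5).
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
Set-Location "C:\scripts"    # placeholder: use the folder where you saved spncreation.ps1
.\spncreation.ps1            # prompts for values such as the "password" parameter
```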
Follow [Deploy a web app to Azure App Services](/azure/devops/pipelines/apps/cd/
**Do I need a build agent?** <br/> You need at least one [agent](/azure/devops/pipelines/agents/agents) to run your deployments. By default, the build and deployment processes are configured to use the [hosted agents](/azure/devops/pipelines/agents/agents#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
-**I use Team Foundation Server on-premises. Can I configure CD on my server to target Azure Government?** <br/>
-Currently, Team Foundation Server cannot be used to deploy to an Azure Government Cloud.
+**I use Team Foundation Server on premises. Can I configure CD on my server to target Azure Government?** <br/>
+Currently, Team Foundation Server can't be used to deploy to an Azure Government Cloud.
## Next steps -- Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/)
+- Subscribe to the [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
- Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md
recommendations: false Previously updated : 03/01/2022 Last updated : 03/02/2022 # Public safety and justice in Azure Government
Microsoft treats Criminal Justice Information Services (CJIS) compliance as a co
The [Criminal Justice Information Services](https://www.fbi.gov/services/cjis) (CJIS) Division of the US Federal Bureau of Investigation (FBI) gives state, local, and federal law enforcement and criminal justice agencies access to criminal justice information (CJI), for example, fingerprint records and criminal histories. Law enforcement and other government agencies in the United States must ensure that their use of cloud services for the transmission, storage, or processing of CJI complies with the [CJIS Security Policy](https://www.fbi.gov/services/cjis/cjis-security-policy-resource-center/view), which establishes minimum security requirements and controls to safeguard CJI.
-The CJIS Security Policy integrates presidential and FBI directives, federal laws, and the criminal justice community's Advisory Policy Board decisions, along with guidance from the National Institute of Standards and Technology (NIST). The CJIS Security Policy is updated periodically to reflect evolving security requirements.
+### Azure Government and CJIS Security Policy
-The CJIS Security Policy defines 13 areas that private contractors such as cloud service providers must evaluate to determine if their use of cloud services can be consistent with CJIS requirements. These areas correspond closely to control families in [NIST SP 800-53](https://csrc.nist.gov/Projects/risk-management/sp800-53-controls/release-search#!/800-53), which is also the basis for the US Federal Risk and Authorization Management Program (FedRAMP). The FBI CJIS Information Security Officer (ISO) Program Office has published a [security control mapping of CJIS Security Policy requirements to NIST SP 800-53](https://www.fbi.gov/file-repository/csp-v5_5-to-nist-controls-mapping-1.pdf/view). The corresponding NIST SP 800-53 controls are listed for each CJIS Security Policy section.
+Microsoft's commitment to meeting the applicable CJIS regulatory controls helps criminal justice organizations stay compliant with the CJIS Security Policy when implementing cloud-based solutions. For more information about Azure support for CJIS, see [Azure CJIS compliance offering](/azure/compliance/offerings/offering-cjis).
-All private contractors who process CJI must sign the CJIS Security Addendum, a uniform agreement approved by the US Attorney General that helps ensure the security and confidentiality of CJI required by the Security Policy. It commits the contractor to maintaining a security program consistent with federal and state laws, regulations, and standards. The addendum also limits the use of CJI to the purposes for which a government agency provided it.
-
-### Azure and CJIS Security Policy
-
-Microsoft will sign the CJIS Security Addendum in states with CJIS Information Agreements. These agreements tell state law enforcement authorities responsible for compliance with CJIS Security Policy how Microsoft's cloud security controls help protect the full lifecycle of data and ensure appropriate background screening of operating personnel with potential access to CJI.
-
-Microsoft has agreements signed with nearly all 50 states and the District of Columbia except for the following states: Delaware, Louisiana, Maryland, New Mexico, Ohio, and South Dakota. Microsoft continues to work with state governments to enter into CJIS Information Agreements.
-
-Microsoft's commitment to meeting the applicable CJIS regulatory controls help criminal justice organizations be compliant with the CJIS Security Policy when implementing cloud-based solutions. Microsoft can accommodate customers subject to the CJIS Security Policy requirements in:
--- [Azure Government](./documentation-government-welcome.md)-- [Dynamics 365 US Government](/power-platform/admin/microsoft-dynamics-365-government#certifications-and-accreditations)-- [Office 365 GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc#us-government-community-compliance)-
-Microsoft has assessed the operational policies and procedures of Microsoft Azure Government, Dynamics 365 US Government, and Office 365 GCC, and will attest to their ability in the applicable services agreements to meet FBI requirements. For more information about Azure support for CJIS, see [Azure CJIS compliance offering](/azure/compliance/offerings/offering-cjis).
-
-The remainder of this article discusses technologies that you can use to safeguard CJI stored or processed in Azure cloud services. These technologies can help you establish sole control over CJI that you're responsible for.
+The remainder of this article discusses technologies that you can use to safeguard CJI stored or processed in Azure cloud services. **These technologies can help you establish sole control over CJI that you're responsible for.**
> [!NOTE] > You are wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article does not constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance.
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
We strongly recommended to update to generally available versions listed as foll
| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>Hotfix</sup> | | September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> | | December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
-| January 2021 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
+| January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7, v1.15.1 and AMA Windows v1.1.3.1, v1.1.5.0. Please use hotfixed versions listed above. <sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
The following prerequisites must be met prior to installing the Azure Monitor ag
- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines. This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal). - The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. - The virtual machine must have access to the following HTTPS endpoints:
- - *.ods.opinsights.azure.com
- - *.ingest.monitor.azure.com
- - *.control.monitor.azure.com
+ - global.handler.control.monitor.azure.com
+ - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
+ - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
+ (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
+ > [!NOTE] > This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed.
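Before installing the agent, you may want to confirm the machine can reach the endpoints listed above. One possible check with PowerShell is sketched below; the region and workspace ID are placeholders for your own values.

```powershell
# Test HTTPS (port 443) reachability for the required Azure Monitor agent endpoints.
$endpoints = @(
    "global.handler.control.monitor.azure.com",
    "westus.handler.control.monitor.azure.com",                       # <region>.handler.control.monitor.azure.com
    "00000000-0000-0000-0000-000000000000.ods.opinsights.azure.com"   # <workspace-id>.ods.opinsights.azure.com
)
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}
```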
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
dependencies
Normally, the SDK sends data at fixed intervals (typically 30 secs) or whenever buffer is full (typically 500 items). However, in some cases, you might want to flush the buffer--for example, if you are using the SDK in an application that shuts down.
-*C#*
+*.NET*
```csharp telemetry.Flush();
telemetry.flush();
The function is asynchronous for the [server telemetry channel](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel/).
-Ideally, flush() method should be used in the shutdown activity of the Application.
+We recommend using the flush() or flushAsync() methods in the shutdown activity of the application when using the .NET or JS SDK.
+
+For example:
+
+*JS*
+
+```javascript
+// Immediately send all queued telemetry. By default, it is sent async.
+flush(async?: boolean = true)
+```
## Authenticated users
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis-visualizations.md
The UI supports selecting multiple subscriptions to view resource changes. Use t
:::image type="content" source="./media/change-analysis/multiple-subscriptions-support.png" alt-text="Screenshot of subscription filter that supports selecting multiple subscriptions":::
-## Application Change Analysis in the Diagnose and solve problems tool
+## Diagnose and solve problems tool
Application Change Analysis is: - A standalone detector in the Web App **Diagnose and solve problems** tool. - Aggregated in **Application Crashes** and **Web App Down detectors**.
-From your app service's overview page in Azure portal, select **Diagnose and solve problems** the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered. Enable web app in-guest change tracking with the following instructions:
+From your resource's overview page in the Azure portal, select **Diagnose and solve problems** in the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered.
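The tool performs this registration for you. If you prefer to register the provider ahead of time, or want to confirm its state, a minimal Azure PowerShell sketch (assuming you're signed in to the target subscription) is:

```powershell
# Register the Change Analysis resource provider and check its registration state.
Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"
Get-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis" |
    Select-Object ProviderNamespace, RegistrationState
```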
+
+### Diagnose and solve problems tool for Web App
+
+> [!NOTE]
+> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
1. Select **Availability and Performance**.
By default, the graph displays changes from within the past 24 hours help with i
:::image type="content" source="./media/change-analysis/change-view.png" alt-text="Screenshot of the change diff view":::
-## Diagnose and Solve Problems tool
-Change Analysis displays as an insight card in a virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource experiences within the past 72 hours.
-
-Under **Common problems**, select **View change details** to view the filtered view from Change Analysis standalone UI.
-
+### Diagnose and solve problems tool for Virtual Machines
-## Virtual Machine Diagnose and Solve Problems
+Change Analysis displays as an insight card in your virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource experiences within the past 72 hours.
1. Within your virtual machine, select **Diagnose and solve problems** from the left menu. 1. Go to **Troubleshooting tools**. 1. Scroll to the end of the troubleshooting options and select **Analyze recent changes** to view changes on the virtual machine.
+ :::image type="content" source="./media/change-analysis/vm-dnsp-troubleshootingtools.png" alt-text="Screenshot of the VM Diagnose and Solve Problems":::
+
+ :::image type="content" source="./media/change-analysis/analyze-recent-changes.png" alt-text="Change analyzer in troubleshooting tools":::
+
+### Diagnose and solve problems tool for Azure SQL Database and other resources
+
+You can view Change Analysis data for [multiple Azure resources](./change-analysis.md#supported-resource-types), but we highlight Azure SQL Database below.
+
+1. Within your resource, select **Diagnose and solve problems** from the left menu.
+1. Under **Common problems**, select **View change details** to view the filtered view from Change Analysis standalone UI.
+ :::image type="content" source="./media/change-analysis/diagnose-tool-other-resources.png" alt-text="Screenshot of viewing common problems in Diagnose and Solve Problems tool.":::
## Activity Log change history
Use the [View change history](../essentials/activity-log.md#view-change-history)
1. Once registered, you can view changes from **Azure Resource Graph** immediately from the past 14 days. - Changes from other sources will be available approximately 4 hours after the subscription is onboarded.
+ :::image type="content" source="./media/change-analysis/activity-log-change-history.png" alt-text="Activity Log change history integration":::
## VM Insights integration
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis.md
Application Change Analysis service supports resource property level changes in
- Virtual Machine - Virtual machine scale set - App Service-- Azure Kubernetes service
+- Azure Kubernetes Service (AKS)
- Azure Function - Networking resources: - Network Security Group
Unlike Azure Resource Graph, Change Analysis securely queries and computes IP Co
### Changes in web app deployment and configuration (in-guest changes)
-Every 4 hours, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
+Every 30 minutes, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
Unlike Azure Resource Manager changes, code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**. :::image type="content" source="./media/change-analysis/scan-changes.png" alt-text="Screenshot of the Scan changes now button":::
-Currently all text-based files under site root **wwwroot** with the following extensions are supported:
+If you don't see changes within 30 minutes, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+Currently, all text-based files under site root **wwwroot** with the following extensions are supported:
- *.json - *.xml - *.ini
You'll need to register the `Microsoft.ChangeAnalysis` resource provider with an
- Enter the Web App **Diagnose and Solve Problems** tool, or - Bring up the Change Analysis standalone tab.
-For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#application-change-analysis-in-the-diagnose-and-solve-problems-tool) section.
+For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool-for-web-app) section.
+
+If you don't see changes within 30 minutes, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+ ## Cost Application Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md
To onboard your app, follow the App Center quickstart for each platform your app
## Track events in your app
-After your app is onboarded to App Center, it needs to be modified to send custom event telemetry using the App Center SDK. Custom events are the only type of App Center telemetry that is exported to Application Insights.
+After your app is onboarded to App Center, it needs to be modified to send custom event telemetry using the App Center SDK.
To send custom events from iOS apps, use the `trackEvent` or `trackEvent:withProperties` methods in the App Center SDK. [Learn more about tracking events from iOS apps.](/mobile-center/sdk/analytics/ios)
To delete the Application Insights resource:
## Next steps > [!div class="nextstepaction"]
-> [Understand how customers are using your app](../app/usage-overview.md)
+> [Understand how customers are using your app](../app/usage-overview.md)
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
To receive, store, and explore your monitoring data, include the SDK in your code, and then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration.
-The Node.js SDK can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the SDK also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
+The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
You can use the TelemetryClient API to manually instrument and monitor additional aspects of your app and system. We describe the TelemetryClient API in more detail later in this article.
Before you begin, make sure that you have an Azure subscription, or [get a new o
1. Sign in to the [Azure portal][portal]. 2. [Create an Application Insights resource](create-new-resource.md)
-### <a name="sdk"></a> Set up the Node.js SDK
+### <a name="sdk"></a> Set up the Node.js client library
Include the SDK in your app, so it can gather data.
Include the SDK in your app, so it can gather data.
![Copy instrumentation key](./media/nodejs/instrumentation-key-001.png)
-2. Add the Node.js SDK library to your app's dependencies via package.json. From the root folder of your app, run:
+2. Add the Node.js client library to your app's dependencies via package.json. From the root folder of your app, run:
```bash npm install applicationinsights --save
appInsights
For a full description of the TelemetryClient API, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
-You can track any request, event, metric, or exception by using the Application Insights Node.js SDK. The following code example demonstrates some of the APIs that you can use:
+You can track any request, event, metric, or exception by using the Application Insights client library for Node.js. The following code example demonstrates some of the APIs that you can use:
```javascript let appInsights = require("applicationinsights");
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
# System performance counters in Application Insights
-Windows provides a wide variety of [performance counters](/windows/desktop/perfctrs/about-performance-counters) such as CPU occupancy, memory, disk, and network usage. You can also define your own performance counters. Performance counters collection is supported as long as your application is running under IIS on an on-premises host, or virtual machine to which you have administrative access. Though applications running as Azure Web Apps don't have direct access to performance counters, a subset of available counters are collected by Application Insights.
+Windows provides a wide variety of [performance counters](/windows/desktop/perfctrs/about-performance-counters) such as processor, memory, and disk usage statistics. You can also define your own performance counters. Performance counters collection is supported as long as your application is running under IIS on an on-premises host, or virtual machine to which you have administrative access. Though applications running as Azure Web Apps don't have direct access to performance counters, a subset of available counters are collected by Application Insights.
## View counters
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
With specific business events, you can chart your users' progress through your s
Events can be logged from the client side of the app: ```JavaScript
- appInsights.trackEvent("ExpandDetailTab", {DetailTab: tabName});
+ appInsights.trackEvent({name: "incrementCount"});
``` Or from the server side:
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
The **Data Retention** page allows retention settings of 30, 31, 60, 90, 120, 18
Workspaces with 30 days retention might actually retain data for 31 days. If it's imperative that data be kept for only 30 days, use the Azure Resource Manager to set the retention to 30 days and with the `immediatePurgeDataOn30Days` parameter.
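As an illustration only, one way to set this through the Resource Manager API is with `Invoke-AzRestMethod`. The subscription, resource group, workspace name, API version, and the exact property path for `immediatePurgeDataOn30Days` are assumptions here; verify them against the workspace resource schema before use.

```powershell
# Sketch: set 30-day retention and request immediate purge at 30 days via the ARM API.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/" +
        "Microsoft.OperationalInsights/workspaces/<workspace-name>?api-version=2021-06-01"
$body = @{
    properties = @{
        retentionInDays = 30
        features        = @{ immediatePurgeDataOn30Days = $true }   # assumed property path
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PATCH -Payload $body
```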
-By default, two data types - `Usage` and `AzureActivity` - are retained for a minimum of 90 days at no charge. If the workspace retention is increased to more than 90 days, the retention of these data types is also increased. These data types are also free from data ingestion charges.
+By default, two data types - `Usage` and `AzureActivity` - are retained for a minimum of 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types, and you'll be charged for retaining this data beyond the 90-day period. These data types are also free from data ingestion charges.
Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents`, and `AppTraces`) are also retained for 90 days at no charge by default. Their retention can be adjusted using the retention by data type functionality.
To facilitate this assessment, the following query can be used to make a recomme
Here is the pricing tier recommendation query: ```kusto
-// Set these parameters before running query
-// Pricing details available at https://azure.microsoft.com/pricing/details/monitor/
+// Set these parameters before running query.
+// For Pay-As-You-Go (per-GB) pricing details, see https://azure.microsoft.com/pricing/details/monitor/.
+// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage.
let daysToEvaluate = 7; // Enter number of previous days to analyze (reduce if the query is taking too long) let workspaceHasSecurityCenter = false; // Specify if the workspace has Defender for Cloud (formerly known as Azure Security Center) let PerNodePrice = 15.; // Enter your monthly price per monitored nodes
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-get-started.md
na Previously updated : 04/21/2021 Last updated : 03/03/2022
For more information about using GPG, see [The GNU Privacy Handbook](https://www
## Supported scenarios
-The snapshot tools can be used in the following scenarios.
--- Single SID-- Multiple SID-- HSR-- Scale-out-- MDC (Only single tenant supported)-- Single Container-- SUSE Operating System-- RHEL Operating System-- SKU TYPE I-- SKU TYPE II-
-See [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md)
+The snapshot tools can be used in the scenarios described in [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md) and
+[SAP HANA with Azure NetApp Files](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md).
## Snapshot Support Matrix from SAP
azure-portal Azure Portal Video Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-video-series.md
Title: Azure portal how-to video series description: Find video demos for how to work with Azure services in the portal. View and link directly to the latest how-to videos. keywords: Previously updated : 03/16/2021 Last updated : 03/03/2022 # Azure portal how-to video series
-The Azure portal how-to video series showcases how to work with Azure services in the Azure portal. Each week the Azure portal team adds to the video playlist. These interactive demos can help you be more efficient and productive.
+The [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR) showcases how to work with Azure services in the Azure portal. Each week the Azure portal team adds to the video playlist. These interactive demos can help you be more efficient and productive.
## Featured video
-In this featured video, we show you how to build tabs and alerts in Azure workbooks.
+In this featured video, we show you how to move your resources in Azure between resource groups and locations.
-> [!VIDEO https://www.youtube.com/embed/3XY3lYgrRvA]
+> [!VIDEO https://www.youtube.com/embed/8HVAP4giLdc]
-[How to build tabs and alerts in Azure workbooks](https://www.youtube.com/watch?v=3XY3lYgrRvA)
+[How to move Azure resources](https://www.youtube.com/watch?v=8HVAP4giLdc)
-Catch up on these recent videos you may have missed:
+Catch up on these videos you may have missed:
| [How to easily manage your virtual machine](https://www.youtube.com/watch?v=vQClJHt2ulQ) | [How to use pills to filter in the Azure portal](https://www.youtube.com/watch?v=XyKh_3NxUlM) | [How to get a visualization view of your resources](https://www.youtube.com/watch?v=wudqkkJd5E4) | | | | |
Explore the [Azure portal how-to series](https://www.youtube.com/playlist?list=P
## Next steps Explore hundreds of videos for Azure services in the [video library](https://azure.microsoft.com/resources/videos/index/?tag=microsoft-azure-portal).+
azure-resource-manager Resource Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-dependencies.md
Title: Set resource dependencies in Bicep description: Describes how to specify the order resources are deployed. Previously updated : 02/04/2022 Last updated : 03/02/2022 # Resource dependencies in Bicep
resource otherZone 'Microsoft.Network/dnszones@2018-05-01' = {
} ```
-While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. You can't query which resources were defined in the `dependsOn` element after deployment. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
+While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. After deployment, the resource doesn't retain deployment dependencies in its properties, so there are no commands or operations that let you see dependencies. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
Even though explicit dependencies are sometimes required, the need for them is rare. In most cases, you can use a symbolic name to imply the dependency between resources. If you find yourself setting explicit dependencies, you should consider if there's a way to remove it.
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
Title: Set deployment order for resources description: Describes how to set one Azure resource as dependent on another resource during deployment. The dependencies ensure resources are deployed in the correct order. Previously updated : 12/21/2020 Last updated : 03/02/2022 # Define the order for deploying resources in ARM templates
The following example shows a network interface that depends on a virtual networ
} ```
-While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. You can't query which resources were defined in the `dependsOn` element after deployment. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
+While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. After deployment, the resource doesn't retain deployment dependencies in its properties, so there are no commands or operations that let you see dependencies. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
## Child resources
azure-sql Active Geo Replication Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-configure-portal.md
Last updated 08/20/2021
This article shows you how to configure [active geo-replication for Azure SQL Database](active-geo-replication-overview.md#active-geo-replication-terminology-and-capabilities) using the [Azure portal](https://portal.azure.com) or Azure CLI and to initiate failover.
-For best practices using auto-failover groups, see [Best practices for Azure SQL Database](auto-failover-group-overview.md#best-practices-for-sql-database) and [Best practices for Azure SQL Managed Instance](auto-failover-group-overview.md#best-practices-for-sql-managed-instance).
+For best practices using auto-failover groups, see [Auto-failover groups with Azure SQL Database](auto-failover-group-sql-db.md) and [Auto-failover groups with Azure SQL Managed Instance](../managed-instance/auto-failover-group-sql-mi.md).
azure-sql Auto Failover Group Configure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure-sql-db.md
+
+ Title: Configure an auto-failover group
+
+description: Learn how to configure an auto-failover group for a single or pooled database in Azure SQL Database using the Azure portal and PowerShell.
+++++
+ms.devlang:
+++ Last updated : 03/01/2022
+zone_pivot_groups: azure-sql-deployment-option-single-elastic
+
+# Configure an auto-failover group for Azure SQL Database
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](auto-failover-group-configure-sql-db.md)
+> * [Azure SQL Managed Instance](../managed-instance/auto-failover-group-configure-sql-mi.md)
+
+This topic teaches you how to configure an [auto-failover group](auto-failover-group-sql-db.md) for single and pooled databases in Azure SQL Database by using the Azure portal and Azure PowerShell. For an end-to-end experience, review the [Auto-failover group tutorial](failover-group-add-single-database-tutorial.md).
+
+> [!NOTE]
+> This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see [Configure auto-failover groups in Azure SQL Managed Instance](../managed-instance/auto-failover-group-configure-sql-mi.md).
++++
+## Prerequisites
+
+Consider the following prerequisites for creating your failover group for a single database:
+
+- The server login and firewall settings for the secondary server must match those of your primary server. (A sketch for copying firewall rules to the secondary server follows these prerequisites.)
+
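The article doesn't prescribe how to align these settings. One possible sketch for copying the primary server's firewall rules to the secondary server with Azure PowerShell is shown below; all names are placeholders.

```powershell
# Copy the primary server's firewall rules to the secondary server so that
# connections behave the same after a failover.
$primaryRG   = "<Primary-Resource-Group>"
$secondaryRG = "<Secondary-Resource-Group>"

Get-AzSqlServerFirewallRule -ResourceGroupName $primaryRG -ServerName "<Primary-Server-Name>" |
    ForEach-Object {
        New-AzSqlServerFirewallRule -ResourceGroupName $secondaryRG `
            -ServerName "<Secondary-Server-Name>" `
            -FirewallRuleName $_.FirewallRuleName `
            -StartIpAddress $_.StartIpAddress `
            -EndIpAddress $_.EndIpAddress
    }
```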
+## Create failover group
+
+# [Portal](#tab/azure-portal)
+
+Create your failover group and add your single database to it using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the database you want to add to the failover group.
+1. Select the name of the server under **Server name** to open the settings for the server.
+
+ ![Open server for single db](./media/auto-failover-group-configure-sql-db/open-sql-db-server.png)
+
+1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
+
+ ![Add new failover group](./media/auto-failover-group-configure-sql-db/sqldb-add-new-failover-group.png)
+
+1. On the **Failover Group** page, enter or select the required values, and then select **Create**.
+
+ - **Databases within the group**: Choose the database you want to add to your failover group. Adding the database to the failover group will automatically start the geo-replication process.
+
+ ![Add SQL Database to failover group](./media/auto-failover-group-configure-sql-db/add-sqldb-to-failover-group.png)
+
+# [PowerShell](#tab/azure-powershell)
+
+Create your failover group and add your database to it using PowerShell.
+
+ ```powershell-interactive
+ $subscriptionId = "<SubscriptionID>"
+ $resourceGroupName = "<Resource-Group-Name>"
+ $location = "<Region>"
+ $adminLogin = "<Admin-Login>"
+ $password = "<Complex-Password>"
+ $serverName = "<Primary-Server-Name>"
+ $databaseName = "<Database-Name>"
+ $drLocation = "<DR-Region>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Create a secondary server in the failover region
+ Write-host "Creating a secondary server in the failover region..."
+ $drServer = New-AzSqlServer -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName `
+ -Location $drLocation `
+ -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential `
+ -ArgumentList $adminlogin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))
+ $drServer
+
+ # Create a failover group between the servers
+ Write-host "Creating a failover group between the primary and secondary server..."
+ $failovergroup = New-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -PartnerServerName $drServerName `
+ -FailoverGroupName $failoverGroupName `
+ -FailoverPolicy Automatic `
+ -GracePeriodWithDataLossHours 2
+ $failovergroup
+
+ # Add the database to the failover group
+ Write-host "Adding the database to the failover group..."
+ Get-AzSqlDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -DatabaseName $databaseName | `
+ Add-AzSqlDatabaseToFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Successfully added the database to the failover group..."
+ ```
+++
+## Test failover
+
+Test failover of your failover group using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Test failover of your failover group using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the database you want to add to the failover group.
+
+ ![Open server for single db](./media/auto-failover-group-configure-sql-db/open-sql-db-server.png)
+
+1. Select **Failover groups** under the **Settings** pane and then choose the failover group you just created.
+
+ ![Select the failover group from the portal](./media/auto-failover-group-configure-sql-db/select-failover-group.png)
+
+1. Review which server is primary and which server is secondary.
+1. Select **Failover** from the task pane to fail over your failover group containing your database.
+1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
+
+ ![Fail over your failover group containing your database](./media/auto-failover-group-configure-sql-db/failover-sql-db.png)
+
+1. Review which server is now primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
+1. Select **Failover** again to fail the servers back to their original roles.
+
+# [PowerShell](#tab/azure-powershell)
+
+Test failover of your failover group using PowerShell.
+
+Check the role of the secondary replica:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Check role of secondary replica
+ Write-host "Confirming the secondary replica is secondary...."
+ (Get-AzSqlDatabaseFailoverGroup `
+ -FailoverGroupName $failoverGroupName `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName).ReplicationRole
+ ```
+
+Fail over to the secondary server:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Failover to secondary server
+ Write-host "Failing over failover group to the secondary..."
+ Switch-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Failed failover group to successfully to" $drServerName
+ ```
+
+Revert failover group back to the primary server:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Revert failover to primary server
+ Write-host "Failing over failover group to the primary...."
+ Switch-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Failed failover group successfully to back to" $serverName
+ ```
+++
+> [!IMPORTANT]
+> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
++++
+## Prerequisites
+
+Consider the following prerequisites for creating your failover group for a pooled database:
+
+- The server login and firewall settings for the secondary server must match those of your primary server.
+
+## Create failover group
+
+Create the failover group for your elastic pool using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Create your failover group and add your elastic pool to it using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the elastic pool you want to add to the failover group.
+1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
+
+ ![Open server for elastic pool](./media/auto-failover-group-configure-sql-db/server-for-elastic-pool.png)
+
+1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
+
+ ![Add new failover group](./media/auto-failover-group-configure-sql-db/sqldb-add-new-failover-group.png)
+
+1. On the **Failover Group** page, enter or select the required values, and then select **Create**. Either create a new secondary server, or select an existing secondary server.
+
+1. Select **Databases within the group** then choose the elastic pool you want to add to the failover group. If an elastic pool does not already exist on the secondary server, a warning appears prompting you to create an elastic pool on the secondary server. Select the warning, and then select **OK** to create the elastic pool on the secondary server.
+
+ ![Add elastic pool to failover group](./media/auto-failover-group-configure-sql-db/add-elastic-pool-to-failover-group.png)
+
+1. Select **Select** to apply your elastic pool settings to the failover group, and then select **Create** to create your failover group. Adding the elastic pool to the failover group will automatically start the geo-replication process.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create your failover group and add your elastic pool to it using PowerShell.
+
+ ```powershell-interactive
+ $subscriptionId = "<SubscriptionID>"
+ $resourceGroupName = "<Resource-Group-Name>"
+ $location = "<Region>"
+ $adminLogin = "<Admin-Login>"
+ $password = "<Complex-Password>"
+ $serverName = "<Primary-Server-Name>"
+ $databaseName = "<Database-Name>"
+ $poolName = "myElasticPool"
+ $drLocation = "<DR-Region>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Create a failover group between the servers
+ Write-host "Creating failover group..."
+ New-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -PartnerServerName $drServerName `
+ -FailoverGroupName $failoverGroupName `
+ -FailoverPolicy Automatic `
+ -GracePeriodWithDataLossHours 2
+ Write-host "Failover group created successfully."
+
+ # Add elastic pool to the failover group
+ Write-host "Enumerating databases in elastic pool...."
+ $FailoverGroup = Get-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -FailoverGroupName $failoverGroupName
+ $databases = Get-AzSqlElasticPoolDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -ElasticPoolName $poolName
+ Write-host "Adding databases to failover group..."
+ $failoverGroup = $failoverGroup | Add-AzSqlDatabaseToFailoverGroup `
+ -Database $databases
+ Write-host "Databases added to failover group successfully."
+ ```
+++
+## Test failover
+
+Test failover of your elastic pool using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Fail your failover group over to the secondary server, and then fail back using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the elastic pool you want to add to the failover group.
+1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
+
+ ![Open server for elastic pool](./media/auto-failover-group-configure-sql-db/server-for-elastic-pool.png)
+1. Select **Failover groups** under the **Settings** pane and then choose the failover group you created in section 2.
+
+ ![Select the failover group from the portal](./media/auto-failover-group-configure-sql-db/select-failover-group.png)
+
+1. Review which server is primary, and which server is secondary.
+1. Select **Failover** from the task pane to fail over your failover group containing your elastic pool.
+1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
+
+ ![Fail over your failover group containing your database](./media/auto-failover-group-configure-sql-db/failover-sql-db.png)
+
+1. Review which server is primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
+1. Select **Failover** again to fail the failover group back to the original settings.
+
+# [PowerShell](#tab/azure-powershell)
+
+Test failover of your failover group using PowerShell.
+
+Check the role of the secondary replica:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Check role of secondary replica
+ Write-host "Confirming the secondary replica is secondary...."
+ (Get-AzSqlDatabaseFailoverGroup `
+ -FailoverGroupName $failoverGroupName `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName).ReplicationRole
+ ```
+
+Fail over to the secondary server:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Failover to secondary server
+ Write-host "Failing over failover group to the secondary..."
+ Switch-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Failed failover group to successfully to" $drServerName
+ ```
+++
+> [!IMPORTANT]
+> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
++
+## Use Private Link
+
+Using a private link allows you to associate a logical server to a specific private IP address within the virtual network and subnet.
+
+To use a private link with your failover group, do the following:
+
+1. Ensure your primary and secondary servers are in a [paired region](../../availability-zones/cross-region-replication-azure.md).
+1. Create the virtual network and subnet in each region to host private endpoints for the primary and secondary servers such that they have non-overlapping IP address spaces. For example, a primary virtual network address range of 10.0.0.0/16 and a secondary virtual network address range of 10.0.0.1/16 would overlap, so that combination can't be used. For more information about virtual network address ranges, see the blog [designing Azure virtual networks](https://devblogs.microsoft.com/premier-developer/understanding-cidr-notation-when-designing-azure-virtual-networks-and-subnets/).
+1. Create a [private endpoint and Azure Private DNS zone for the primary server](../../private-link/create-private-endpoint-portal.md#create-a-private-endpoint).
+1. Create a private endpoint for the secondary server as well, but this time choose to reuse the same Private DNS zone that was created for the primary server.
+1. Once the private link is established, you can create the failover group following the steps outlined previously in this article.
++
+## Locate listener endpoint
+
+Once your failover group is configured, update the connection string for your application to the listener endpoint. This will keep your application connected to the failover group listener, rather than the primary database, elastic pool, or instance database. That way, you don't have to manually update the connection string every time your database entity fails over, and traffic is routed to whichever entity is currently primary.
+
+The listener endpoint is in the form of `fog-name.database.windows.net`, and is visible in the Azure portal when viewing the failover group:
+
+![Failover group connection string](./media/auto-failover-group-configure-sql-db/find-failover-group-connection-string.png)
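For example, an ADO.NET-style connection string that targets the listener rather than a specific server might be assembled like this. All names are placeholders, and the authentication settings depend on your environment.

```powershell
# Build a connection string that points at the failover group listener.
$failoverGroupName = "<Failover-Group-Name>"
$databaseName      = "<Database-Name>"
$connectionString  = "Server=tcp:$failoverGroupName.database.windows.net,1433;" +
                     "Initial Catalog=$databaseName;User ID=<username>;Password=<password>;Encrypt=True;"
```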
+
+## <a name="changing-secondary-region-of-the-failover-group"></a> Change the secondary region
+
+To illustrate the change sequence, we will assume that server A is the primary server, server B is the existing secondary server, and server C is the new secondary in the third region. To make the transition, follow these steps:
+
+1. For each database on server A, create an additional secondary on server C using [active geo-replication](active-geo-replication-overview.md) (see the sketch after these steps). Each database on server A will then have two secondaries, one on server B and one on server C. This guarantees that the primary databases remain protected during the transition.
+1. Delete the failover group. At this point login attempts using failover group endpoints will be failing.
+1. Re-create the failover group with the same name between servers A and C.
+1. Add all primary databases on server A to the new failover group. At this point the login attempts will stop failing.
+1. Delete server B. All databases on B will be deleted automatically.
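The following sketch illustrates steps 1 and 2 with Azure PowerShell. Server, database, and resource group names are placeholders, and the loop assumes every user database on server A belongs to the failover group.

```powershell
# Step 1: create an additional geo-secondary on server C for each database on server A.
$databases = Get-AzSqlDatabase -ResourceGroupName "<rg-A>" -ServerName "<server-A>" |
    Where-Object { $_.DatabaseName -ne "master" }
foreach ($db in $databases) {
    New-AzSqlDatabaseSecondary -ResourceGroupName "<rg-A>" -ServerName "<server-A>" `
        -DatabaseName $db.DatabaseName `
        -PartnerResourceGroupName "<rg-C>" -PartnerServerName "<server-C>" `
        -AllowConnections "All"
}

# Step 2: delete the existing failover group between servers A and B.
Remove-AzSqlDatabaseFailoverGroup -ResourceGroupName "<rg-A>" `
    -ServerName "<server-A>" -FailoverGroupName "<Failover-Group-Name>"
```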
+
+## <a name="changing-primary-region-of-the-failover-group"></a> Change the primary region
+
+To illustrate the change sequence, we will assume server A is the primary server, server B is the existing secondary server, and server C is the new primary in the third region. To make the transition, follow these steps:
+
+1. Perform a planned geo-failover to switch the primary server to B. Server A will become the new secondary server. The failover may result in several minutes of downtime. The actual time will depend on the size of failover group.
+1. Create additional secondaries of each database on server B to server C using [active geo-replication](active-geo-replication-overview.md). Each database on server B will have two secondaries, one on server A and one on server C. This will guarantee that the primary databases remain protected during the transition.
+1. Delete the failover group. At this point login attempts using failover group endpoints will be failing.
+1. Re-create the failover group with the same name between servers B and C.
+1. Add all primary databases on B to the new failover group. At this point the login attempts will stop failing.
+1. Perform a planned geo-failover of the failover group to switch B and C. Now server C will become the primary and B the secondary. All secondary databases on server A will be automatically linked to the primaries on C. As in step 1, the failover may result in several minutes of downtime.
+1. Delete server A. All databases on A will be deleted automatically.
+
+> [!IMPORTANT]
+> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group or a server DNS alias with the same name. Because failover group names and DNS aliases must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
+
+## Permissions
+
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/auto-failover-group-overview.md
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor role](../../role-based-access-control/built-in-roles.md#sql-server-contributor) has all the necessary permissions to manage failover groups.
+
+The following table lists specific permission scopes for Azure SQL Database:
+
+| **Action** | **Permission** | **Scope**|
+| :- | :- | :- |
+| **Create failover group**| Azure RBAC write access | Primary server </br> Secondary server </br> All databases in failover group |
+| **Update failover group** | Azure RBAC write access | Failover group </br> All databases on the current primary server|
+| **Fail over failover group** | Azure RBAC write access | Failover group on new server |
+| | |
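For example, a sketch for granting a user the SQL Server Contributor role at the resource group scope (the sign-in name and resource group are placeholders):

```powershell
# Grant the built-in SQL Server Contributor role, which includes the Azure RBAC
# write access needed to create and manage failover groups.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "SQL Server Contributor" `
    -ResourceGroupName "<Resource-Group-Name>"
```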
+
+## Remarks
+
+- Removing a failover group for a single or pooled database does not stop replication, and it does not delete the replicated database. You will need to manually stop geo-replication and delete the database from the secondary server if you want to add a single or pooled database back to a failover group after it's been removed. Failing to do either may result in an error similar to `The operation cannot be performed due to multiple errors` when attempting to add the database to the failover group.
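One possible sketch of that manual cleanup with Azure PowerShell is shown below. All names are placeholders, and the commands assume they're run against the primary database with the secondary specified as the partner.

```powershell
# Stop geo-replication to the former secondary, then delete the copy on the secondary server.
Remove-AzSqlDatabaseSecondary -ResourceGroupName "<Primary-Resource-Group>" `
    -ServerName "<Primary-Server-Name>" `
    -DatabaseName "<Database-Name>" `
    -PartnerResourceGroupName "<Secondary-Resource-Group>" `
    -PartnerServerName "<Secondary-Server-Name>"

Remove-AzSqlDatabase -ResourceGroupName "<Secondary-Resource-Group>" `
    -ServerName "<Secondary-Server-Name>" `
    -DatabaseName "<Database-Name>"
```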
+
+## Next steps
+
+For detailed steps on configuring a failover group, see the following tutorials:
+
+- [Add a single database to a failover group](failover-group-add-single-database-tutorial.md)
+- [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
+- [Add a managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
+
+For an overview of Azure SQL Database high availability options, see [geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md).
azure-sql Auto Failover Group Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure.md
- Title: Configure a failover group-
-description: Learn how to configure an auto-failover group for an Azure SQL Database (both single and pooled) and SQL Managed Instance, using the Azure portal, the Azure CLI, and PowerShell.
- Previously updated : 08/14/2019
-# Configure a failover group for Azure SQL Database
-
-This topic teaches you how to configure an [auto-failover group](auto-failover-group-overview.md) for Azure SQL Database and Azure SQL Managed Instance.
-
-## Single database
-
-Create the failover group and add a single database to it using the Azure portal or PowerShell.
-
-### Prerequisites
-
-Consider the following prerequisites:
-
-- The server login and firewall settings for the secondary server must match those of your primary server. One way to copy the firewall rules is shown in the sketch below.
-
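-The following is a minimal sketch of copying the server-level firewall rules from the primary to the secondary server. It assumes both servers are in the same resource group and uses placeholder names; the server logins still need to be created on the secondary separately.
-
- ```powershell-interactive
- # Placeholder values
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $drServerName = "<Secondary-Server-Name>"
-
- # Copy each server-level firewall rule from the primary to the secondary server
- Get-AzSqlServerFirewallRule -ResourceGroupName $resourceGroupName -ServerName $serverName |
-    ForEach-Object {
-        New-AzSqlServerFirewallRule -ResourceGroupName $resourceGroupName `
-            -ServerName $drServerName `
-            -FirewallRuleName $_.FirewallRuleName `
-            -StartIpAddress $_.StartIpAddress `
-            -EndIpAddress $_.EndIpAddress
-    }
- ```
-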
-### Create failover group
-
-# [Portal](#tab/azure-portal)
-
-Create your failover group and add your single database to it using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the database you want to add to the failover group.
-1. Select the name of the server under **Server name** to open the settings for the server.
-
- ![Open server for single db](./media/auto-failover-group-configure/open-sql-db-server.png)
-
-1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
-
- ![Add new failover group](./media/auto-failover-group-configure/sqldb-add-new-failover-group.png)
-
-1. On the **Failover Group** page, enter or select the required values, and then select **Create**.
-
- - **Databases within the group**: Choose the database you want to add to your failover group. Adding the database to the failover group will automatically start the geo-replication process.
-
- ![Add SQL Database to failover group](./media/auto-failover-group-configure/add-sqldb-to-failover-group.png)
-
-# [PowerShell](#tab/azure-powershell)
-
-Create your failover group and add your database to it using PowerShell.
-
- ```powershell-interactive
- $subscriptionId = "<SubscriptionID>"
- $resourceGroupName = "<Resource-Group-Name>"
- $location = "<Region>"
- $adminLogin = "<Admin-Login>"
- $password = "<Complex-Password>"
- $serverName = "<Primary-Server-Name>"
- $databaseName = "<Database-Name>"
- $drLocation = "<DR-Region>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Create a secondary server in the failover region
- Write-host "Creating a secondary server in the failover region..."
- $drServer = New-AzSqlServer -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName `
- -Location $drLocation `
- -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential `
- -ArgumentList $adminlogin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))
- $drServer
-
- # Create a failover group between the servers
- Write-host "Creating a failover group between the primary and secondary server..."
- $failovergroup = New-AzSqlDatabaseFailoverGroup `
-    -ResourceGroupName $resourceGroupName `
-    -ServerName $serverName `
-    -PartnerServerName $drServerName `
-    -FailoverGroupName $failoverGroupName `
-    -FailoverPolicy Automatic `
-    -GracePeriodWithDataLossHours 2
- $failovergroup
-
- # Add the database to the failover group
- Write-host "Adding the database to the failover group..."
- Get-AzSqlDatabase `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -DatabaseName $databaseName | `
- Add-AzSqlDatabaseToFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -FailoverGroupName $failoverGroupName
- Write-host "Successfully added the database to the failover group..."
- ```
---
-### Test failover
-
-Test failover of your failover group using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Test failover of your failover group using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the database you want to add to the failover group.
-
- ![Open server for single db](./media/auto-failover-group-configure/open-sql-db-server.png)
-
-1. Select **Failover groups** under the **Settings** pane and then choose the failover group you just created.
-
- ![Select the failover group from the portal](./media/auto-failover-group-configure/select-failover-group.png)
-
-1. Review which server is primary and which server is secondary.
-1. Select **Failover** from the task pane to fail over your failover group containing your database.
-1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
-
- ![Fail over your failover group containing your database](./media/auto-failover-group-configure/failover-sql-db.png)
-
-1. Review which server is now primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
-1. Select **Failover** again to fail the servers back to their original roles.
-
-# [PowerShell](#tab/azure-powershell)
-
-Test failover of your failover group using PowerShell.
-
-Check the role of the secondary replica:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Check role of secondary replica
- Write-host "Confirming the secondary replica is secondary...."
- (Get-AzSqlDatabaseFailoverGroup `
- -FailoverGroupName $failoverGroupName `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName).ReplicationRole
- ```
-
-Fail over to the secondary server:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Failover to secondary server
- Write-host "Failing over failover group to the secondary..."
- Switch-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName `
- -FailoverGroupName $failoverGroupName
- Write-host "Failed failover group to successfully to" $drServerName
- ```
-
-Revert failover group back to the primary server:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Revert failover to primary server
- Write-host "Failing over failover group to the primary...."
- Switch-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -FailoverGroupName $failoverGroupName
- Write-host "Failed failover group successfully to back to" $serverName
- ```
---
-> [!IMPORTANT]
-> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
-
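-The following sketch shows one way to do that with PowerShell: remove the database from the failover group, stop geo-replication, and only then delete the copy on the secondary server. The variable names are placeholders.
-
- ```powershell-interactive
- # Placeholder values
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $drServerName = "<Secondary-Server-Name>"
- $databaseName = "<Database-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # 1. Remove the database from the failover group (run against the primary server)
- $database = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName `
-    -ServerName $serverName -DatabaseName $databaseName
- Remove-AzSqlDatabaseFromFailoverGroup -ResourceGroupName $resourceGroupName `
-    -ServerName $serverName -FailoverGroupName $failoverGroupName -Database $database
-
- # 2. Stop geo-replication between the primary and the secondary copy
- Remove-AzSqlDatabaseSecondary -ResourceGroupName $resourceGroupName `
-    -ServerName $serverName -DatabaseName $databaseName `
-    -PartnerResourceGroupName $resourceGroupName -PartnerServerName $drServerName
-
- # 3. Delete the now-unlinked copy on the secondary server
- Remove-AzSqlDatabase -ResourceGroupName $resourceGroupName `
-    -ServerName $drServerName -DatabaseName $databaseName
- ```
-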
-## Elastic pool
-
-Create the failover group and add an elastic pool to it using the Azure portal, or PowerShell.
-
-### Prerequisites
-
-Consider the following prerequisites:
-
-- The server login and firewall settings for the secondary server must match those of your primary server.
-
-### Create the failover group
-
-Create the failover group for your elastic pool using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Create your failover group and add your elastic pool to it using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the elastic pool you want to add to the failover group.
-1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
-
- ![Open server for elastic pool](./media/auto-failover-group-configure/server-for-elastic-pool.png)
-
-1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
-
- ![Add new failover group](./media/auto-failover-group-configure/sqldb-add-new-failover-group.png)
-
-1. On the **Failover Group** page, enter or select the required values, and then select **Create**. Either create a new secondary server, or select an existing secondary server.
-
-1. Select **Databases within the group** then choose the elastic pool you want to add to the failover group. If an elastic pool does not already exist on the secondary server, a warning appears prompting you to create an elastic pool on the secondary server. Select the warning, and then select **OK** to create the elastic pool on the secondary server.
-
- ![Add elastic pool to failover group](./media/auto-failover-group-configure/add-elastic-pool-to-failover-group.png)
-
-1. Select **Select** to apply your elastic pool settings to the failover group, and then select **Create** to create your failover group. Adding the elastic pool to the failover group will automatically start the geo-replication process.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create your failover group and add your elastic pool to it using PowerShell.
-
- ```powershell-interactive
- $subscriptionId = "<SubscriptionID>"
- $resourceGroupName = "<Resource-Group-Name>"
- $location = "<Region>"
- $adminLogin = "<Admin-Login>"
- $password = "<Complex-Password>"
- $serverName = "<Primary-Server-Name>"
- $databaseName = "<Database-Name>"
- $poolName = "myElasticPool"
- $drLocation = "<DR-Region>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Create a failover group between the servers
- Write-host "Creating failover group..."
- New-AzSqlDatabaseFailoverGroup `
-    -ResourceGroupName $resourceGroupName `
-    -ServerName $serverName `
-    -PartnerServerName $drServerName `
-    -FailoverGroupName $failoverGroupName `
-    -FailoverPolicy Automatic `
- -GracePeriodWithDataLossHours 2
- Write-host "Failover group created successfully."
-
- # Add elastic pool to the failover group
- Write-host "Enumerating databases in elastic pool...."
- $FailoverGroup = Get-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -FailoverGroupName $failoverGroupName
- $databases = Get-AzSqlElasticPoolDatabase `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -ElasticPoolName $poolName
- Write-host "Adding databases to failover group..."
- $failoverGroup = $failoverGroup | Add-AzSqlDatabaseToFailoverGroup `
- -Database $databases
- Write-host "Databases added to failover group successfully."
- ```
---
-### Test failover
-
-Test failover of your elastic pool using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Fail your failover group over to the secondary server, and then fail back using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the elastic pool you want to add to the failover group.
-1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
-
- ![Open server for elastic pool](./media/auto-failover-group-configure/server-for-elastic-pool.png)
-1. Select **Failover groups** under the **Settings** pane and then choose the failover group you just created.
-
- ![Select the failover group from the portal](./media/auto-failover-group-configure/select-failover-group.png)
-
-1. Review which server is primary, and which server is secondary.
-1. Select **Failover** from the task pane to fail over your failover group containing your elastic pool.
-1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
-
- ![Fail over your failover group containing your database](./media/auto-failover-group-configure/failover-sql-db.png)
-
-1. Review which server is primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
-1. Select **Failover** again to fail the failover group back to the original settings.
-
-# [PowerShell](#tab/azure-powershell)
-
-Test failover of your failover group using PowerShell.
-
-Check the role of the secondary replica:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Check role of secondary replica
- Write-host "Confirming the secondary replica is secondary...."
- (Get-AzSqlDatabaseFailoverGroup `
- -FailoverGroupName $failoverGroupName `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName).ReplicationRole
- ```
-
-Fail over to the secondary server:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Failover to secondary server
- Write-host "Failing over failover group to the secondary..."
- Switch-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName `
- -FailoverGroupName $failoverGroupName
- Write-host "Failed failover group to successfully to" $drServerName
- ```
---
-> [!IMPORTANT]
-> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
-
-## SQL Managed Instance
-
-Create a failover group between two managed instances in Azure SQL Managed Instance by using the Azure portal or PowerShell.
-
-You will need to either configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) or create a gateway for the virtual network of each SQL Managed Instance, connect the two gateways, and then create the failover group.
-
-Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. Managed instances residing in geo-paired regions have much better performance compared to unpaired regions.
-
-### Prerequisites
-
-Consider the following prerequisites:
-
-- The secondary managed instance must be empty.
-- The subnet range for the secondary virtual network must not overlap the subnet range of the primary virtual network.
-- The collation and timezone of the secondary managed instance must match those of the primary managed instance. One way to check is shown in the sketch after this list.
-- When connecting the two gateways, the **Shared Key** should be the same for both connections.
-
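-A quick way to compare the collation and time zone is a sketch like the following, assuming the `Get-AzSqlInstance` output exposes the `Collation` and `TimezoneId` properties; the names are placeholders.
-
- ```powershell-interactive
- # Placeholder values
- $primaryMI = Get-AzSqlInstance -ResourceGroupName "<Primary-Resource-Group>" -Name "<Primary-Managed-Instance-Name>"
- $secondaryMI = Get-AzSqlInstance -ResourceGroupName "<Secondary-Resource-Group>" -Name "<Secondary-Managed-Instance-Name>"
-
- # Compare the collation and time zone of both instances; they must match
- $primaryMI, $secondaryMI | Select-Object ManagedInstanceName, Collation, TimezoneId
- ```
-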
-### Create primary virtual network gateway
-
-If you have not configured [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), you can create the primary virtual network gateway with the Azure portal, or PowerShell.
-
-> [!NOTE]
-> The SKU of the gateway affects throughput performance. This article deploys a gateway with the most basic SKU (`VpnGw1`). Deploy a higher SKU (example: `VpnGw3`) to achieve higher throughput. For all available options, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#benchmark).
-
-# [Portal](#tab/azure-portal)
-
-Create the primary virtual network gateway using the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com), go to your resource group and select the **Virtual network** resource for your primary managed instance.
-1. Select **Subnets** under **Settings** and then select to add a new **Gateway subnet**. Leave the default values.
-
- ![Add gateway for primary managed instance](./media/auto-failover-group-configure/add-subnet-gateway-primary-vnet.png)
-
-1. Once the subnet gateway is created, select **Create a resource** from the left navigation pane and then type `Virtual network gateway` in the search box. Select the **Virtual network gateway** resource published by **Microsoft**.
-
- ![Create a new virtual network gateway](./media/auto-failover-group-configure/create-virtual-network-gateway.png)
-
-1. Fill out the required fields to configure the gateway for your primary managed instance.
-
- The following table shows the values necessary for the gateway for the primary managed instance:
-
- | **Field** | Value |
- | | |
- | **Subscription** | The subscription where your primary managed instance is. |
- | **Name** | The name for your virtual network gateway. |
- | **Region** | The region where your primary managed instance is. |
- | **Gateway type** | Select **VPN**. |
- | **VPN Type** | Select **Route-based** |
- | **SKU**| Leave default of `VpnGw1`. |
- | **Location**| The location where your primary managed instance and primary virtual network are. |
- | **Virtual network**| Select the virtual network for your primary managed instance. |
- | **Public IP address**| Select **Create new**. |
- | **Public IP address name**| Enter a name for your IP address. |
- | &nbsp; | &nbsp; |
-
-1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
-
- ![Primary gateway settings](./media/auto-failover-group-configure/settings-for-primary-gateway.png)
-
-1. Select **Create** to create your new virtual network gateway.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create the primary virtual network gateway using PowerShell.
-
- ```powershell-interactive
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $primaryVnetName = "<Primary-Virtual-Network-Name>"
- $primaryGWName = "<Primary-Gateway-Name>"
- $primaryGWPublicIPAddress = $primaryGWName + "-ip"
- $primaryGWIPConfig = $primaryGWName + "-ipc"
- $primaryGWAsn = 61000
-
- # Get the primary virtual network
- $vnet1 = Get-AzVirtualNetwork -Name $primaryVnetName -ResourceGroupName $primaryResourceGroupName
- $primaryLocation = $vnet1.Location
-
- # Create primary gateway
- Write-host "Creating primary gateway..."
- $subnet1 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet1
- $gwpip1= New-AzPublicIpAddress -Name $primaryGWPublicIPAddress -ResourceGroupName $primaryResourceGroupName `
- -Location $primaryLocation -AllocationMethod Dynamic
- $gwipconfig1 = New-AzVirtualNetworkGatewayIpConfig -Name $primaryGWIPConfig `
- -SubnetId $subnet1.Id -PublicIpAddressId $gwpip1.Id
-
- $gw1 = New-AzVirtualNetworkGateway -Name $primaryGWName -ResourceGroupName $primaryResourceGroupName `
- -Location $primaryLocation -IpConfigurations $gwipconfig1 -GatewayType Vpn `
- -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $primaryGWAsn
- $gw1
- ```
---
-### Create secondary virtual network gateway
-
-Create the secondary virtual network gateway using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Repeat the steps in the previous section to create the virtual network subnet and gateway for the secondary managed instance. Fill out the required fields to configure the gateway for your secondary managed instance.
-
-The following table shows the values necessary for the gateway for the secondary managed instance:
-
- | **Field** | Value |
- | | |
- | **Subscription** | The subscription where your secondary managed instance is. |
- | **Name** | The name for your virtual network gateway, such as `secondary-mi-gateway`. |
- | **Region** | The region where your secondary managed instance is. |
- | **Gateway type** | Select **VPN**. |
- | **VPN Type** | Select **Route-based** |
- | **SKU**| Leave default of `VpnGw1`. |
- | **Location**| The location where your secondary managed instance and secondary virtual network is. |
- | **Virtual network**| Select the virtual network for your secondary managed instance, such as `vnet-sql-mi-secondary`. |
- | **Public IP address**| Select **Create new**. |
- | **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
- | &nbsp; | &nbsp; |
-
- ![Secondary gateway settings](./media/auto-failover-group-configure/settings-for-secondary-gateway.png)
-
-# [PowerShell](#tab/azure-powershell)
-
-Create the secondary virtual network gateway using PowerShell.
-
- ```powershell-interactive
- $secondaryResourceGroupName = "<Secondary-Resource-Group>"
- $secondaryVnetName = "<Secondary-Virtual-Network-Name>"
- $secondaryGWName = "<Secondary-Gateway-Name>"
- $secondaryGWPublicIPAddress = $secondaryGWName + "-IP"
- $secondaryGWIPConfig = $secondaryGWName + "-ipc"
- $secondaryGWAsn = 62000
-
- # Get the secondary virtual network
- $vnet2 = Get-AzVirtualNetwork -Name $secondaryVnetName -ResourceGroupName $secondaryResourceGroupName
- $secondaryLocation = $vnet2.Location
-
- # Create the secondary gateway
- Write-host "Creating secondary gateway..."
- $subnet2 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet2
- $gwpip2= New-AzPublicIpAddress -Name $secondaryGWPublicIPAddress -ResourceGroupName $secondaryResourceGroupName `
- -Location $secondaryLocation -AllocationMethod Dynamic
- $gwipconfig2 = New-AzVirtualNetworkGatewayIpConfig -Name $secondaryGWIPConfig `
- -SubnetId $subnet2.Id -PublicIpAddressId $gwpip2.Id
-
- $gw2 = New-AzVirtualNetworkGateway -Name $secondaryGWName -ResourceGroupName $secondaryResourceGroupName `
- -Location $secondaryLocation -IpConfigurations $gwipconfig2 -GatewayType Vpn `
- -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $secondaryGWAsn
-
- $gw2
- ```
---
-### Connect the gateways
-
-Create connections between the two gateways using the Azure portal or PowerShell.
-
-Two connections need to be created - the connection from the primary gateway to the secondary gateway, and then the connection from the secondary gateway to the primary gateway.
-
-The same shared key must be used for both connections.
-
-# [Portal](#tab/azure-portal)
-
-Create connections between the two gateways using the Azure portal.
-
-1. Select **Create a resource** from the [Azure portal](https://portal.azure.com).
-1. Type `connection` in the search box and then press enter to search, which takes you to the **Connection** resource, published by Microsoft.
-1. Select **Create** to create your connection.
-1. On the **Basics** tab, select the following values and then select **OK**.
- 1. Select `VNet-to-VNet` for the **Connection type**.
- 1. Select your subscription from the drop-down.
- 1. Select the resource group for your managed instance in the drop-down.
- 1. Select the location of your primary managed instance from the drop-down.
-1. On the **Settings** tab, select or enter the following values and then select **OK**:
- 1. Choose the primary network gateway for the **First virtual network gateway**, such as `Primary-Gateway`.
- 1. Choose the secondary network gateway for the **Second virtual network gateway**, such as `Secondary-Gateway`.
- 1. Select the checkbox next to **Establish bidirectional connectivity**.
- 1. Either leave the default primary connection name, or rename it to a value of your choice.
- 1. Provide a **Shared key (PSK)** for the connection, such as `mi1m2psk`.
-
- ![Create gateway connection](./media/auto-failover-group-configure/create-gateway-connection.png)
-
-1. On the **Summary** tab, review the settings for your bidirectional connection and then select **OK** to create your connection.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create connections between the two gateways using PowerShell.
-
- ```powershell-interactive
- $vpnSharedKey = "mi1mi2psk"
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $primaryGWConnection = "<Primary-connection-name>"
- $primaryLocation = "<Primary-Region>"
- $secondaryResourceGroupName = "<Secondary-Resource-Group>"
- $secondaryGWConnection = "<Secondary-connection-name>"
- $secondaryLocation = "<Secondary-Region>"
-
- # Connect the primary to secondary gateway
- Write-host "Connecting the primary gateway"
- New-AzVirtualNetworkGatewayConnection -Name $primaryGWConnection -ResourceGroupName $primaryResourceGroupName `
- -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 -Location $primaryLocation `
- -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
- $primaryGWConnection
-
- # Connect the secondary to primary gateway
- Write-host "Connecting the secondary gateway"
-
- New-AzVirtualNetworkGatewayConnection -Name $secondaryGWConnection -ResourceGroupName $secondaryResourceGroupName `
- -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 -Location $secondaryLocation `
- -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
- $secondaryGWConnection
- ```
---
-### Create the failover group
-
-Create the failover group for your managed instances by using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Create the failover group for your SQL Managed Instances by using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the primary managed instance you want to add to the failover group.
-1. Under **Settings**, navigate to **Instance Failover Groups** and then choose to **Add group** to open the **Instance Failover Group** page.
-
- ![Add a failover group](./media/auto-failover-group-configure/add-failover-group.png)
-
-1. On the **Instance Failover Group** page, type the name of your failover group and then choose the secondary managed instance from the drop-down. Select **Create** to create your failover group.
-
- ![Create failover group](./media/auto-failover-group-configure/create-failover-group.png)
-
-1. Once failover group deployment is complete, you will be taken back to the **Failover group** page.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create the failover group for your managed instances using PowerShell.
-
- ```powershell-interactive
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $failoverGroupName = "<Failover-Group-Name>"
- $primaryLocation = "<Primary-Region>"
- $secondaryLocation = "<Secondary-Region>"
- $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
- $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
-
- # Create failover group
- Write-host "Creating the failover group..."
- $failoverGroup = New-AzSqlDatabaseInstanceFailoverGroup -Name $failoverGroupName `
- -Location $primaryLocation -ResourceGroupName $primaryResourceGroupName -PrimaryManagedInstanceName $primaryManagedInstance `
- -PartnerRegion $secondaryLocation -PartnerManagedInstanceName $secondaryManagedInstance `
- -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
- $failoverGroup
- ```
---
-### Test failover
-
-Test failover of your failover group using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Test failover of your failover group using the Azure portal.
-
-1. Navigate to your _secondary_ managed instance within the [Azure portal](https://portal.azure.com) and select **Instance Failover Groups** under settings.
-1. Review which managed instance is the primary, and which managed instance is the secondary.
-1. Select **Failover** and then select **Yes** on the warning about TDS sessions being disconnected.
-
- ![Fail over the failover group](./media/auto-failover-group-configure/failover-mi-failover-group.png)
-
-1. Review which managed instance is the primary and which instance is the secondary. If failover succeeded, the two instances should have switched roles.
-
- ![Managed instances have switched roles after failover](./media/auto-failover-group-configure/mi-switched-after-failover.png)
-
-1. Go to the new _secondary_ managed instance and select **Failover** once again to fail the primary instance back to the primary role.
-
-# [PowerShell](#tab/azure-powershell)
-
-Test failover of your failover group using PowerShell.
-
- ```powershell-interactive
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $secondaryResourceGroupName = "<Secondary-Resource-Group>"
- $failoverGroupName = "<Failover-Group-Name>"
- $primaryLocation = "<Primary-Region>"
- $secondaryLocation = "<Secondary-Region>"
- $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
- $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
-
- # Verify the current primary role
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName
-
- # Failover the primary managed instance to the secondary role
- Write-host "Failing primary over to the secondary location"
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $secondaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
- Write-host "Successfully failed failover group to secondary location"
-
- # Verify the current primary role
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName
-
- # Fail primary managed instance back to primary role
- Write-host "Failing primary back to primary role"
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $primaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
- Write-host "Successfully failed failover group to primary location"
-
- # Verify the current primary role
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName
- ```
---
-## Use Private Link
-
-Using a private link allows you to associate a logical server with a specific private IP address within the virtual network and subnet.
-
-To use a private link with your failover group, do the following:
-
-1. Ensure your primary and secondary servers are in a [paired region](../../availability-zones/cross-region-replication-azure.md).
-1. Create the virtual network and subnet in each region to host private endpoints for the primary and secondary servers such that they have non-overlapping IP address spaces. For example, the primary virtual network address range of 10.0.0.0/16 and the secondary virtual network address range of 10.0.0.1/16 overlap because they describe the same /16 address space. For more information about virtual network address ranges, see the blog [designing Azure virtual networks](https://devblogs.microsoft.com/premier-developer/understanding-cidr-notation-when-designing-azure-virtual-networks-and-subnets/).
-1. Create a [private endpoint and Azure Private DNS zone for the primary server](../../private-link/create-private-endpoint-portal.md#create-a-private-endpoint).
-1. Create a private endpoint for the secondary server as well, but this time choose to reuse the same Private DNS zone that was created for the primary server.
-1. Once the private link is established, you can create the failover group following the steps outlined previously in this article.
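-
-The following is a minimal sketch of step 3 for the primary server; the endpoint, connection, and subnet names are placeholders, and the Private DNS zone still needs to be configured as described in the linked article.
-
- ```powershell-interactive
- # Placeholder values
- $resourceGroupName = "<Primary-Resource-Group>"
- $vnetName = "<Primary-Virtual-Network-Name>"
- $subnetName = "<Endpoint-Subnet-Name>"
- $serverName = "<Primary-Server-Name>"
-
- # Create a private endpoint that targets the primary logical server
- $server = Get-AzSqlServer -ResourceGroupName $resourceGroupName -ServerName $serverName
- $connection = New-AzPrivateLinkServiceConnection -Name "primary-sql-connection" `
-    -PrivateLinkServiceId $server.ResourceId -GroupId "sqlServer"
- $vnet = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
- $subnet = Get-AzVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $vnet
- New-AzPrivateEndpoint -ResourceGroupName $resourceGroupName -Name "primary-sql-endpoint" `
-    -Location $server.Location -Subnet $subnet -PrivateLinkServiceConnection $connection
- ```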
--
-## Locate listener endpoint
-
-Once your failover group is configured, update the connection string for your application to the listener endpoint. This will keep your application connected to the failover group listener, rather than the primary database, elastic pool, or instance database. That way, you don't have to manually update the connection string every time your database entity fails over, and traffic is routed to whichever entity is currently primary.
-
-The listener endpoint is in the form of `fog-name.database.windows.net`, and is visible in the Azure portal, when viewing the failover group:
-
-![Failover group connection string](./media/auto-failover-group-configure/find-failover-group-connection-string.png)
-
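-To verify connectivity through the listener, a quick sketch like the following can be run from Windows PowerShell; it assumes the `System.Data.SqlClient` assembly is available and uses placeholder values.
-
- ```powershell-interactive
- # Placeholder values - replace with your failover group name and credentials
- $connectionString = "Server=tcp:<fog-name>.database.windows.net,1433;Initial Catalog=<Database-Name>;User ID=<Admin-Login>;Password=<Complex-Password>;Encrypt=True;"
-
- # Open a connection through the listener and report which server answered
- $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
- $connection.Open()
- $command = $connection.CreateCommand()
- $command.CommandText = "SELECT @@SERVERNAME"
- Write-Host "Connected through the listener to:" $command.ExecuteScalar()
- $connection.Close()
- ```
-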
-## Remarks
-
-- Removing a failover group for a single or pooled database does not stop replication, and it does not delete the replicated database. You will need to manually stop geo-replication and delete the database from the secondary server if you want to add a single or pooled database back to a failover group after it's been removed. Failing to do either may result in an error similar to `The operation cannot be performed due to multiple errors` when attempting to add the database to the failover group.
-
-## Next steps
-
-For detailed steps on configuring a failover group, see the following tutorials:
-
-- [Add a single database to a failover group](failover-group-add-single-database-tutorial.md)
-- [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
-- [Add a managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
-
-For an overview of Azure SQL Database high availability options, see [geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md).
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-overview.md
- Title: Auto-failover groups-
-description: Auto-failover groups let you manage geo-replication and automatic / coordinated failover of a group of databases on a server, or all databases on a managed instance.
- Previously updated : 2/2/2022
-# Use auto-failover groups to enable transparent and coordinated geo-failover of multiple databases
-
-The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing [active geo-replication](active-geo-replication-overview.md) feature, designed to simplify deployment and management of geo-replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region. A failover group can include one or multiple databases, typically used by the same application. Additionally, you can use the readable secondary databases to offload read-only query workloads.
-
-> [!NOTE]
-> Auto-failover groups support geo-replication of all databases in the group to only one secondary server or instance in a different region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for the same primary replica, use [active geo-replication](active-geo-replication-overview.md).
->
-
-When you are using auto-failover groups with automatic failover policy, an outage that impacts one or several of the databases in the group will result in an automatic geo-failover. Typically, these are outages that cannot be automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include an incident caused by a SQL Database tenant ring or control ring being down due to an OS kernel memory leak on compute nodes, or an incident caused by one or more tenant rings being down because a wrong network cable was accidentally cut during routine hardware decommissioning. For more information, see [SQL Database High Availability](high-availability-sla.md).
-
-In addition, auto-failover groups provide read-write and read-only listener end-points that remain unchanged during geo-failovers. Whether you use manual or automatic failover activation, a geo-failover switches all secondary databases in the group to the primary role. After the geo-failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region. For geo-failover RPO and RTO, see [Overview of Business Continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md).
-
-When you are using auto-failover groups with automatic failover policy, an outage that impacts databases on a server or managed instance results in an automatic geo-failover.
-
-You can manage an auto-failover group using:
-
-- [Azure portal](geo-distributed-application-configure-tutorial.md)
-- [Azure CLI: Failover Group](scripts/add-database-to-failover-group-cli.md)
-- [PowerShell: Failover Group](scripts/add-database-to-failover-group-powershell.md)
-- [REST API: Failover group](/rest/api/sql/failovergroups)
-
-When configuring a failover group, ensure that authentication and network access on the secondary is set up to function correctly after geo-failover, when the geo-secondary becomes the new primary. For details, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
-
-To achieve full business continuity, adding regional database redundancy is only part of the solution. Recovering an application (service) end-to-end after a catastrophic failure requires recovery of all components that constitute the service and any dependent services. Examples of these components include the client software (for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that all components are resilient to the same failures and become available within the recovery time objective (RTO) of your application. Therefore, you need to identify all dependent services and understand the guarantees and capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the failover of the services on which it depends. For more information about designing solutions for disaster recovery, see [Designing Cloud Solutions for Disaster Recovery Using active geo-replication](designing-cloud-solutions-for-disaster-recovery.md).
-
-## <a name="terminology-and-capabilities"></a> Failover group terminology and capabilities
-
-- **Failover group (FOG)**
-
- A failover group is a named group of databases managed by a single server or within a managed instance that can fail over as a unit to another region in case all or some primary databases become unavailable due to an outage in the primary region. When it's created for SQL Managed Instance, a failover group contains all user databases in the instance and therefore only one failover group can be configured on an instance.
-
- > [!IMPORTANT]
- > The name of the failover group must be globally unique within the `.database.windows.net` domain.
-
-- **Servers**
-
- Some or all of the user databases on a logical server can be placed in a failover group. A single server can also support multiple failover groups.
-
-- **Primary**
-
- The server or managed instance that hosts the primary databases in the failover group.
-
-- **Secondary**
-
- The server or managed instance that hosts the secondary databases in the failover group. The secondary cannot be in the same region as the primary.
-
-- **Adding single databases to failover group**
-
- You can put several single databases on the same server into the same failover group. If you add a single database to the failover group, it automatically creates a secondary database using the same edition and compute size on the secondary server that you specified when the failover group was created. If you add a database that already has a secondary database in the secondary server, that geo-replication link is inherited by the group. When you add a database that already has a secondary database in a server that is not part of the failover group, a new secondary is created in the secondary server.
-
- > [!IMPORTANT]
- > Make sure that the secondary server doesn't have a database with the same name unless it is an existing secondary database. In failover groups for SQL Managed Instance, all user databases are replicated. You cannot pick a subset of user databases for replication in the failover group.
-
-- **Adding databases in elastic pool to failover group**
-
- You can put all or several databases within an elastic pool into the same failover group. If the primary database is in an elastic pool, the secondary is automatically created in the elastic pool with the same name (secondary pool). You must ensure that the secondary server contains an elastic pool with the same exact name and enough free capacity to host the secondary databases that will be created by the failover group. If you add a database in the pool that already has a secondary database in the secondary pool, that geo-replication link is inherited by the group. When you add a database that already has a secondary database in a server that is not part of the failover group, a new secondary is created in the secondary pool.
-
-- **Initial Seeding**
-
- When adding databases, elastic pools, or managed instances to a failover group, there is an initial seeding phase before data replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the initial seeding to complete depends on the size of your data, number of replicated databases, the load on primary databases, and the speed of the link between the primary and secondary. Under normal circumstances, possible seeding speed is up to 500 GB an hour for SQL Database, and up to 360 GB an hour for SQL Managed Instance. Seeding is performed for all databases in parallel.
-
- For SQL Managed Instance, consider the speed of the Express Route link between the two instances when estimating the time of the initial seeding phase. If the speed of the link between the two instances is slower than what is necessary, the time to seed is likely to be noticeably impacted. You can use the stated seeding speed, number of databases, total size of data, and the link speed to estimate how long the initial seeding phase will take before data replication starts. For example, for a single 100 GB database, the initial seed phase would take about 1.2 hours if the link is capable of pushing 84 GB per hour, and if there are no other databases being seeded. If the link can only transfer 10 GB per hour, then seeding a 100 GB database will take about 10 hours. If there are multiple databases to replicate, seeding will be executed in parallel, and, when combined with a slow link speed, the initial seeding phase may take considerably longer, especially if the parallel seeding of data from all databases exceeds the available link bandwidth. If the network bandwidth between two instances is limited and you are adding multiple managed instances to a failover group, consider adding multiple managed instances to the failover group sequentially, one by one. Given an appropriately sized gateway SKU between the two managed instances, and if corporate network bandwidth allows it, it's possible to achieve speeds as high as 360 GB an hour.
-
-- **DNS zone**
-
- A unique ID that is automatically generated when a new SQL Managed Instance is created. A multi-domain (SAN) certificate for this instance is provisioned to authenticate the client connections to any instance in the same DNS zone. The two managed instances in the same failover group must share the DNS zone.
-
- > [!NOTE]
- > A DNS zone ID is not required or used for failover groups created for SQL Database.
-
-- **Failover group read-write listener**
-
- A DNS CNAME record that points to the current primary. It is created automatically when the failover group is created and allows the read-write workload to transparently reconnect to the primary when the primary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.database.windows.net`. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.<zone_id>.database.windows.net`.
-
-- **Failover group read-only listener**
-
- A DNS CNAME record that points to the current secondary. It is created automatically when the failover group is created and allows the read-only SQL workload to transparently connect to the secondary when the secondary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.database.windows.net`. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.<zone_id>.database.windows.net`.
-
-- **Automatic failover policy**
-
- By default, a failover group is configured with an automatic failover policy. The system triggers a geo-failover after the failure is detected and the grace period has expired. The system must verify that the outage cannot be mitigated by the built-in [high availability infrastructure](high-availability-sla.md), for example due to the scale of the impact. If you want to control the geo-failover workflow from the application or manually, you can turn off automatic failover policy.
-
- > [!NOTE]
- > Because verification of the scale of the outage and how quickly it can be mitigated involves human actions, the grace period cannot be set below one hour. This limitation applies to all databases in the failover group regardless of their data synchronization state.
-
-- **Read-only failover policy**
-
- By default, the failover of the read-only listener is disabled. It ensures that the performance of the primary is not impacted when the secondary is offline. However, it also means the read-only sessions will not be able to connect until the secondary is recovered. If you cannot tolerate downtime for the read-only sessions and can use the primary for both read-only and read-write traffic at the expense of the potential performance degradation of the primary, you can enable failover for the read-only listener by configuring the `AllowReadOnlyFailoverToPrimary` property. In that case, the read-only traffic will be automatically redirected to the primary if the secondary is not available.
-
- > [!NOTE]
- > The `AllowReadOnlyFailoverToPrimary` property only has effect if automatic failover policy is enabled and an automatic geo-failover has been triggered. In that case, if the property is set to True, the new primary will serve both read-write and read-only sessions.
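-
- A sketch of enabling this behavior, assuming your Az.Sql module version exposes the `AllowReadOnlyFailoverToPrimary` parameter (placeholder values):
-
- ```powershell-interactive
- # Allow read-only sessions to be redirected to the primary during an automatic geo-failover
- Set-AzSqlDatabaseFailoverGroup -ResourceGroupName "<Resource-Group-Name>" `
-    -ServerName "<Primary-Server-Name>" `
-    -FailoverGroupName "<Failover-Group-Name>" `
-    -AllowReadOnlyFailoverToPrimary Enabled
- ```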
-
-- **Planned failover**
-
- Planned failover performs full data synchronization between primary and secondary databases before the secondary switches to the primary role. This guarantees no data loss. Planned failover is used in the following scenarios:
-
- - Perform disaster recovery (DR) drills in production when data loss is not acceptable
- - Relocate the databases to a different region
- - Return the databases to the primary region after the outage has been mitigated (failback)
-
-- **Unplanned failover**
-
- Unplanned or forced failover immediately switches the secondary to the primary role without waiting for recent changes to propagate from the primary. This operation may result in data loss. Unplanned failover is used as a recovery method during outages when the primary is not accessible. When the outage is mitigated, the old primary will automatically reconnect and become a new secondary. A planned failover may be executed to fail back, returning the replicas to their original primary and secondary roles.
-
-- **Manual failover**
-
- You can initiate a geo-failover manually at any time regardless of the automatic failover configuration. During an outage that impacts the primary, if automatic failover policy is not configured, a manual failover is required to promote the secondary to the primary role. You can initiate a forced (unplanned) or friendly (planned) failover. A friendly failover is only possible when the old primary is accessible, and can be used to relocate the primary to the secondary region without data loss. When a failover is completed, the DNS records are automatically updated to ensure connectivity to the new primary.
-
-- **Grace period with data loss**
-
- Because the secondary databases are synchronized using asynchronous replication, an automatic geo-failover may result in data loss. You can customize the automatic failover policy to reflect your application's tolerance to data loss. By configuring `GracePeriodWithDataLossHours`, you can control how long the system waits before initiating a forced failover, which may result in data loss.
-
-- **Multiple failover groups**
-
- You can configure multiple failover groups for the same pair of servers to control the scope of geo-failovers. Each group fails over independently. If your tenant-per-database application is deployed in multiple regions and uses elastic pools, you can use this capability to mix primary and secondary databases in each pool. This way you may be able to reduce the impact of an outage to only some tenant databases.
-
- > [!NOTE]
- > SQL Managed Instance does not support multiple failover groups.
-
-## Permissions
-
-Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). The [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role has all the necessary permissions to manage failover groups.
-
-### <a name="create-failover-group"></a> Create a failover group
-
-To create a failover group, you need Azure RBAC write access to both the primary and secondary servers, and to all databases in the failover group. For a SQL Managed Instance, you need Azure RBAC write access to both the primary and secondary SQL Managed Instance, but permissions on individual databases are not relevant, because individual SQL Managed Instance databases cannot be added to or removed from a failover group.
-
-### Update a failover group
-
-To update a failover group, you need Azure RBAC write access to the failover group, and all databases on the current primary server or managed instance.
-
-### Fail over a failover group
-
-To fail over a failover group, you need Azure RBAC write access to the failover group on the new primary server or managed instance.
-
-## <a name="best-practices-for-sql-database"></a> Failover group best practices for SQL Database
-
-The auto-failover group must be configured on the primary server and will connect it to the secondary server in a different Azure region. The groups can include all or some databases in these servers. The following diagram illustrates a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.
-
-![Diagram shows a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.](./media/auto-failover-group-overview/auto-failover-group.png)
-
-> [!NOTE]
-> See [Add SQL Database to a failover group](failover-group-add-single-database-tutorial.md) for a detailed step-by-step tutorial adding a database in SQL Database to a failover group.
-
-When designing a service with business continuity in mind, follow these general guidelines:
-
-### <a name="using-one-or-several-failover-groups-to-manage-failover-of-multiple-databases"></a> Use one or several failover groups to manage failover of multiple databases
-
-One or many failover groups can be created between two servers in different regions (primary and secondary servers). Each group can include one or several databases that are recovered as a unit in case all or some primary databases become unavailable due to an outage in the primary region. Creating a failover group creates geo-secondary databases with the same service objective as the primary. If you add an existing geo-replication relationship to a failover group, make sure the geo-secondary is configured with the same service tier and compute size as the primary.
-
-### <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener to connect to primary
-
-For read-write workloads, use `<fog-name>.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the primary. This name does not change after failover. Note the failover involves updating the DNS record so the client connections are redirected to the new primary only after the client DNS cache is refreshed. The time to live (TTL) of the primary and secondary listener DNS record is 30 seconds.
-
-### <a name="using-read-only-listener-for-read-only-workload"></a> Use the read-only listener to connect to geo-secondary
-
-If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. For read-only sessions, use `<fog-name>.secondary.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the geo-secondary. It is also recommended that you indicate read intent in the connection string by using `ApplicationIntent=ReadOnly`.
-
-> [!NOTE]
-> In Premium, Business Critical, and Hyperscale service tiers, SQL Database supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
->
-> - To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.database.windows.net`.
-> - To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.database.windows.net`.
-
-### <a name="preparing-for-performance-degradation"></a> Potential performance degradation after geo-failover
-
-A typical Azure application uses multiple Azure services and consists of multiple components. The automatic geo-failover of the failover group is triggered based on the state the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage and their components may still be available in that region. Once the primary databases switch to the secondary (DR) region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the DR region, follow these [network security guidelines](#failover-groups-and-network-security), and orchestrate the geo-failover of relevant application components together with the database.
-
-### <a name="preparing-for-data-loss"></a> Potential data loss after geo-failover
-
-If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary. If the automatic failover policy is configured, the system waits for the period you specified by `GracePeriodWithDataLossHours` before initiating an automatic geo-failover. The default value is 1 hour. This favors database availability over no data loss. Setting `GracePeriodWithDataLossHours` to a larger number, such as 24 hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database availability.
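-
-For example, raising the grace period to 24 hours might look like the following sketch (placeholder values):
-
- ```powershell-interactive
- # Favor reduced data loss over availability by waiting longer before a forced failover
- Set-AzSqlDatabaseFailoverGroup -ResourceGroupName "<Resource-Group-Name>" `
-    -ServerName "<Primary-Server-Name>" `
-    -FailoverGroupName "<Failover-Group-Name>" `
-    -FailoverPolicy Automatic `
-    -GracePeriodWithDataLossHours 24
- ```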
-
-> [!IMPORTANT]
-> Elastic pools with 800 or fewer DTUs or 8 or fewer vCores, and more than 250 databases may encounter issues including longer planned geo-failovers and degraded performance. These issues are more likely to occur for write intensive workloads, when geo-replicas are widely separated by geography, or when multiple secondary geo-replicas are used for each database. A symptom of these issues is an increase in geo-replication lag over time, potentially leading to a more extensive data loss in an outage. This lag can be monitored using [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database). If these issues occur, then mitigation includes scaling up the pool to have more DTUs or vCores, or reducing the number of geo-replicated databases in the pool.
-
-### <a name="changing-secondary-region-of-the-failover-group"></a> Change the secondary region of a failover group
-
-To illustrate the change sequence, we will assume that server A is the primary server, server B is the existing secondary server, and server C is the new secondary in the third region. To make the transition, follow these steps:
-
-1. Create additional secondaries of each database on server A to server C using [active geo-replication](active-geo-replication-overview.md). Each database on server A will have two secondaries, one on server B and one on server C. This will guarantee that the primary databases remain protected during the transition.
-2. Delete the failover group. At this point login attempts using failover group endpoints will be failing.
-3. Re-create the failover group with the same name between servers A and C.
-4. Add all primary databases on server A to the new failover group. At this point the login attempts will stop failing.
-5. Delete server B. All databases on B will be deleted automatically.
-
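A rough PowerShell sketch of steps 2-4 above is shown below. It assumes the Az.Sql module; the resource group, server, and failover group names are placeholders, and step 1 (creating the geo-secondaries on server C) and step 5 (deleting server B) are omitted.

```powershell
# Sketch of steps 2-4 above; all names are placeholders.
$rg  = "myResourceGroup"
$fog = "fog-name"

# Step 2: delete the failover group between servers A and B.
Remove-AzSqlDatabaseFailoverGroup -ResourceGroupName $rg -ServerName "server-a" -FailoverGroupName $fog

# Step 3: re-create the failover group with the same name between servers A and C.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName $rg -ServerName "server-a" `
    -PartnerServerName "server-c" -FailoverGroupName $fog -FailoverPolicy Automatic

# Step 4: add the primary databases on server A to the new failover group.
$dbs = Get-AzSqlDatabase -ResourceGroupName $rg -ServerName "server-a" |
    Where-Object { $_.DatabaseName -ne "master" }
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName $rg -ServerName "server-a" `
    -FailoverGroupName $fog -Database $dbs
```
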
-### <a name="changing-primary-region-of-the-failover-group"></a> Change the primary region of a failover group
-
-To illustrate the change sequence, we will assume server A is the primary server, server B is the existing secondary server, and server C is the new primary in the third region. To make the transition, follow these steps:
-
-1. Perform a planned geo-failover to switch the primary server to B. Server A will become the new secondary server. The failover may result in several minutes of downtime. The actual time will depend on the size of the failover group.
-2. Create additional secondaries of each database on server B to server C using [active geo-replication](active-geo-replication-overview.md). Each database on server B will have two secondaries, one on server A and one on server C. This will guarantee that the primary databases remain protected during the transition.
-3. Delete the failover group. At this point login attempts using failover group endpoints will be failing.
-4. Re-create the failover group with the same name between servers B and C.
-5. Add all primary databases on B to the new failover group. At this point the login attempts will stop failing.
-6. Perform a planned geo-failover of the failover group to switch B and C. Now server C will become the primary and B the secondary. All secondary databases on server A will be automatically linked to the primaries on C. As in step 1, the failover may result in several minutes of downtime.
-7. Delete server A. All databases on A will be deleted automatically.
-
-> [!IMPORTANT]
-> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group or a server DNS alias with the same name. Because failover group names and DNS aliases must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
-
-## <a name="best-practices-for-sql-managed-instance"></a> Failover group best practices for SQL Managed Instance
-
-The auto-failover group must be configured on the primary instance and will connect it to the secondary instance in a different Azure region. All user databases in the instance will be replicated to the secondary instance. System databases like _master_ and _msdb_ will not be replicated.
-
-The following diagram illustrates a typical configuration of a geo-redundant cloud application using managed instance and auto-failover group.
-
-![auto failover diagram](./media/auto-failover-group-overview/auto-failover-group-mi.png)
-
-> [!NOTE]
-> See [Add managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md) for a detailed step-by-step tutorial on adding a SQL Managed Instance to a failover group.
-
-> [!IMPORTANT]
-> If you deploy auto-failover groups in a hub-and-spoke network topology cross-region, replication traffic should go directly between the two managed instance subnets rather than be directed through the hub networks.
-
-If your application uses SQL Managed Instance as the data tier, follow these general guidelines when designing for business continuity:
-
-### <a name="creating-the-secondary-instance"></a> Create the geo-secondary managed instance
-
-To ensure non-interrupted connectivity to the primary SQL Managed Instance after failover, both the primary and secondary instances must be in the same DNS zone. It will guarantee that the same multi-domain (SAN) certificate can be used to authenticate client connections to either of the two instances in the failover group. When your application is ready for production deployment, create a secondary SQL Managed Instance in a different region and make sure it shares the DNS zone with the primary SQL Managed Instance. You can do it by specifying an optional parameter during creation. If you are using PowerShell or the REST API, the name of the optional parameter is `DNSZonePartner`. The name of the corresponding optional field in the Azure portal is *Primary Managed Instance*.
-
-> [!IMPORTANT]
-> The first managed instance created in the subnet determines the DNS zone for all subsequent instances in the same subnet. This means that two instances in the same subnet cannot belong to different DNS zones.
-
-For more information about creating the secondary SQL Managed Instance in the same DNS zone as the primary instance, see [Create a secondary managed instance](../managed-instance/failover-group-add-instance-tutorial.md#create-a-secondary-managed-instance).
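For illustration only, creating the geo-secondary instance with PowerShell might look like the sketch below. All names, sizes, and the subnet ID are placeholders, and the `New-AzSqlInstance` parameters, in particular `DnsZonePartner` (which takes the resource ID of the primary instance), should be checked against your Az.Sql module version.

```powershell
# Illustrative sketch: create the secondary managed instance in the same DNS zone as the primary.
# All names, sizes, and the subnet ID are placeholders.
$primary = Get-AzSqlInstance -Name "sql-mi-primary" -ResourceGroupName "rg-primary"

New-AzSqlInstance -Name "sql-mi-secondary" -ResourceGroupName "rg-secondary" -Location "westus2" `
    -SubnetId "/subscriptions/<subscription-id>/resourceGroups/rg-secondary/providers/Microsoft.Network/virtualNetworks/vnet-secondary/subnets/mi-subnet" `
    -AdministratorCredential (Get-Credential) `
    -Edition "GeneralPurpose" -ComputeGeneration "Gen5" -VCore 8 -StorageSizeInGB 256 `
    -LicenseType "LicenseIncluded" `
    -DnsZonePartner $primary.Id
```
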
-
-### <a name="using-geo-paired-regions"></a> Use paired regions
-
-Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. SQL Managed Instance failover groups in paired regions have better performance compared to unpaired regions.
-
-### <a name="enabling-replication-traffic-between-two-instances"></a> Enable geo-replication traffic between two managed instances
-
-Because each managed instance is isolated in its own VNet, bidirectional traffic between these VNets must be allowed. For more information, see [Azure VPN gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
-
-### <a name="creating-a-failover-group-between-managed-instances-in-different-subscriptions"></a> Create a failover group between managed instances in different subscriptions
-
-You can create a failover group between SQL Managed Instances in two different subscriptions, as long as the subscriptions are associated with the same [Azure Active Directory Tenant](../../active-directory/fundamentals/active-directory-whatis.md#terminology). When using the PowerShell API, you can do this by specifying the `PartnerSubscriptionId` parameter for the secondary SQL Managed Instance. When using the REST API, each instance ID included in the `properties.managedInstancePairs` parameter can have its own subscription ID.
-
-> [!IMPORTANT]
-> Azure portal does not support creation of failover groups across different subscriptions. Also, for the existing failover groups across different subscriptions and/or resource groups, failover cannot be initiated manually via portal from the primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.
-
-### <a name="managing-failover-to-secondary-instance"></a> Manage geo-failover to a geo-secondary instance
-
-The failover group will manage geo-failover of all databases on the primary managed instance. When a group is created, each database in the instance will be automatically geo-replicated to the geo-secondary instance. You cannot use failover groups to initiate a partial failover of a subset of databases.
-
-> [!IMPORTANT]
-> If a database is dropped on the primary managed instance, it will also be dropped automatically on the geo-secondary managed instance.
-
-### <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener to connect to the primary managed instance
-
-For read-write workloads, use `<fog-name>.zone_id.database.windows.net` as the server name. Connections will be automatically directed to the primary. This name does not change after failover. The geo-failover involves updating the DNS record, so the client connections are redirected to the new primary only after the client DNS cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application will be able to reconnect to it using the same server-side SAN certificate. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
-
-### <a name="using-read-only-listener-to-connect-to-the-secondary-instance"></a> Use the read-only listener to connect to the geo-secondary managed instance
-
-If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. To connect directly to the geo-secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server name. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
-
-> [!NOTE]
-> In the Business Critical tier, SQL Managed Instance supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
->
-> - To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.<zone_id>.database.windows.net`.
-> - To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.<zone_id>.database.windows.net`.
-
-### Potential performance degradation after failover to the geo-secondary managed instance
-
-A typical Azure application uses multiple Azure services and consists of multiple components. The automatic geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage and their components may still be available in that region. Once the primary databases switch to the secondary region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the secondary region and fail over application components together with the database. At configuration time, follow [network security guidelines](#failover-groups-and-network-security) to ensure connectivity to the database in the secondary region.
-
-### Potential data loss after failover to the geo-secondary managed instance
-
-If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary. If the automatic failover policy is configured, a geo-failover is triggered immediately when the service can determine that no data will be lost. Otherwise, the failover is deferred for the period you specify using `GracePeriodWithDataLossHours`. If you configured the automatic failover policy, be prepared for data loss. In general, during outages, Azure favors availability. Setting `GracePeriodWithDataLossHours` to a larger number, such as 24 hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database availability.
-
-The DNS update of the read-write listener will happen immediately after the failover is initiated. This operation will not result in data loss. However, the process of switching database roles can take up to 5 minutes under normal conditions. Until it is completed, some databases in the new primary instance will still be read-only. If a failover is initiated using PowerShell, the operation to switch the primary replica role is synchronous. If it is initiated using the Azure portal, the UI will indicate completion status. If it is initiated using the REST API, use the standard Azure Resource Manager polling mechanism to monitor for completion.
-
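As a point of reference, a planned geo-failover initiated from PowerShell is a single synchronous call against the failover group in the region that will become the new primary. A minimal sketch with placeholder names:

```powershell
# Illustrative planned failover of a managed instance failover group (no data loss).
# -Location is the region of the instance taking over as primary; names are placeholders.
Switch-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "rg-secondary" `
    -Location "westus2" -Name "fog-name"
```
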
-> [!IMPORTANT]
-> Use manual planned failover to move the primary back to the original location once the outage that caused the geo-failover is mitigated.
-
-### Change the secondary region of the managed instance failover group
-
-Let's assume that instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new secondary instance in the third region. To make the transition, follow these steps:
-
-1. Create instance C with the same size as A and in the same DNS zone.
-2. Delete the failover group between instances A and B. At this point the logins will be failing because the SQL aliases for the failover group listeners have been deleted and the gateway will not recognize the failover group name. The secondary databases will be disconnected from the primaries and will become read-write databases.
-3. Create a failover group with the same name between instance A and C. Follow the instructions in [failover group with SQL Managed Instance tutorial](../managed-instance/failover-group-add-instance-tutorial.md). This is a size-of-data operation and will complete when all databases from instance A are seeded and synchronized.
-4. Delete instance B if not needed to avoid unnecessary charges.
-
-> [!NOTE]
-> After step 2 and until step 3 is completed the databases in instance A will remain unprotected from a catastrophic failure of instance A.
-
-### Change the primary region of the managed instance failover group
-
-Let's assume instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new primary instance in the third region. To make the transition, follow these steps:
-
-1. Create instance C with the same size as B and in the same DNS zone.
-2. Connect to instance B and manually failover to switch the primary instance to B. Instance A will become the new secondary instance automatically.
-3. Delete the failover group between instances A and B. At this point login attempts using failover group endpoints will be failing. The secondary databases on A will be disconnected from the primaries and will become read-write databases.
-4. Create a failover group with the same name between instance A and C. Follow the instructions in the [failover group with managed instance tutorial](../managed-instance/failover-group-add-instance-tutorial.md). This is a size-of-data operation and will complete when all databases from instance A are seeded and synchronized. At this point login attempts will stop failing.
-5. Delete instance A if not needed to avoid unnecessary charges.
-
-> [!CAUTION]
-> After step 3 and until step 4 is completed the databases in instance A will remain unprotected from a catastrophic failure of instance A.
-
-> [!IMPORTANT]
-> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group with the same name. Because failover group names must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
-
-### Enable scenarios dependent on objects from the system databases
-
-System databases are **not** replicated to the secondary instance in a failover group. To enable scenarios that depend on objects from the system databases, make sure to create the same objects on the secondary instance and keep them synchronized with the primary instance.
-
-For example, if you plan to use the same logins on the secondary instance, make sure to create them with the identical SID.
-
-```SQL
--- Code to create login on the secondary instance
-CREATE LOGIN foo WITH PASSWORD = '<enterStrongPasswordHere>', SID = <login_sid>;
-```
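One way to capture the SID of an existing login on the primary instance, so it can be reused in the `CREATE LOGIN ... SID = ...` statement above, is sketched below. It assumes the `SqlServer` PowerShell module; the server name and credentials are placeholders.

```powershell
# Illustrative: read the SID of login 'foo' from the primary instance as a 0x... hex string.
# Server name, user name, and password are placeholders.
$result = Invoke-Sqlcmd -ServerInstance "primary-mi.<zone_id>.database.windows.net" -Database "master" `
    -Username "miadmin" -Password "<strongPassword>" `
    -Query "SELECT name, CONVERT(varchar(100), sid, 1) AS sid_hex FROM sys.sql_logins WHERE name = 'foo';"

# Use this value as the SID when creating the login on the secondary instance.
$result.sid_hex
```
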
-### Synchronize instance properties and retention policies between primary and secondary instance
-
-Instances in a failover group remain separate Azure resources, and no changes made to the configuration of the primary instance will be automatically replicated to the secondary instance. Make sure to perform all relevant changes both on primary _and_ secondary instance. For example, if you change backup storage redundancy or long-term backup retention policy on primary instance, make sure to change it on secondary instance as well.
-
-## Failover groups and network security
-
-For some applications the security rules require that the network access to the data tier is restricted to a specific component or components such as a VM, web service, etc. This requirement presents some challenges for business continuity design and the use of failover groups. Consider the following options when implementing such restricted access.
-
-### <a name="using-failover-groups-and-virtual-network-rules"></a> Use failover groups and virtual network service endpoints
-
-If you are using [Virtual Network service endpoints and rules](vnet-service-endpoint-rule-overview.md) to restrict access to your database in SQL Database or SQL Managed Instance, be aware that each virtual network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet. Therefore, only the client applications deployed in the same region can connect to the primary database. Since a geo-failover results in the SQL Database client sessions being rerouted to a server in a different (secondary) region, these sessions will fail if originated from a client outside of that region. For that reason, the automatic failover policy cannot be enabled if the participating servers or instances are included in the Virtual Network rules. To support manual failover, follow these steps:
-
-1. Provision the redundant copies of the front-end components of your application (web service, virtual machines etc.) in the secondary region.
-2. Configure the [virtual network rules](vnet-service-endpoint-rule-overview.md) individually for primary and secondary server.
-3. Enable the [front-end failover using a Traffic manager configuration](designing-cloud-solutions-for-disaster-recovery.md#scenario-1-using-two-azure-regions-for-business-continuity-with-minimal-downtime).
-4. Initiate manual geo-failover when the outage is detected. This option is optimized for the applications that require consistent latency between the front-end and the data tier and supports recovery when either front end, data tier or both are impacted by the outage.
-
-> [!NOTE]
-> If you are using the **read-only listener** to load-balance a read-only workload, make sure that this workload is executed in a VM or other resource in the secondary region so it can connect to the secondary database.
-
-### Use failover groups and firewall rules
-
-If your business continuity plan requires failover using groups with automatic failover, you can restrict access to your database in SQL Database by using public IP firewall rules. To support automatic failover, follow these steps:
-
-1. [Create a public IP](../../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address)
-2. [Create a public load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) and assign the public IP to it.
-3. [Create a virtual network and the virtual machines](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) for your front-end components.
-4. [Create network security group](../../virtual-network/network-security-groups-overview.md) and configure inbound connections.
-5. Ensure that the outbound connections are open to Azure SQL Database in a region by using an `Sql.<Region>` [service tag](../../virtual-network/network-security-groups-overview.md#service-tags).
-6. Create a [SQL Database firewall rule](firewall-configure.md) to allow inbound traffic from the public IP address you create in step 1.
-
-For more information on how to configure outbound access and what IP to use in the firewall rules, see [Load balancer outbound connections](../../load-balancer/load-balancer-outbound-connections.md).
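As an illustration of step 5, an outbound NSG rule that uses the `Sql.<Region>` service tag might be added with a sketch like the following. It assumes the Az.Network module; the NSG name, resource group, and region are placeholders.

```powershell
# Illustrative: allow outbound traffic from the front-end subnet to Azure SQL Database in West US.
# 1433 is the gateway port and 11000-11999 are the redirect ports; names are placeholders.
Get-AzNetworkSecurityGroup -Name "frontend-nsg" -ResourceGroupName "myResourceGroup" |
    Add-AzNetworkSecurityRuleConfig -Name "allow-sql-westus" -Direction Outbound -Access Allow `
        -Protocol Tcp -Priority 200 -SourceAddressPrefix "*" -SourcePortRange "*" `
        -DestinationAddressPrefix "Sql.WestUS" -DestinationPortRange @("1433", "11000-11999") |
    Set-AzNetworkSecurityGroup
```
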
-
-The above configuration ensures that an automatic geo-failover will not block connections from the front-end components. It assumes that the application can tolerate the longer latency between the front end and the data tier.
-
-> [!IMPORTANT]
-> To guarantee business continuity during regional outages you must ensure geographic redundancy for both front-end components and databases.
-
-## <a name="enabling-geo-replication-between-managed-instances-and-their-vnets"></a> Enabling geo-replication between managed instance virtual networks
-
-When you set up a failover group between primary and secondary SQL Managed Instances in two different regions, each instance is isolated using an independent virtual network. To allow replication traffic between these VNets ensure these prerequisites are met:
-
-- The two instances of SQL Managed Instance need to be in different Azure regions.
-- The two instances of SQL Managed Instance need to be the same service tier, and have the same storage size.
-- Your secondary instance of SQL Managed Instance must be empty (no user databases).
-- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Express Route](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there is no firewall rule blocking ports 5022, and 11000-11999. Global VNet Peering is supported with the limitation described in the note below.
-
- > [!IMPORTANT]
- > [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). It means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL managed instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring non-default [maintenance window](./maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
-
-- The two SQL Managed Instance VNets cannot have overlapping IP addresses.
-- You need to set up your Network Security Groups (NSG) such that ports 5022 and the range 11000~12000 are open inbound and outbound for connections from the subnet of the other managed instance. This is to allow replication traffic between the instances.
-
- > [!IMPORTANT]
- > Misconfigured NSG security rules leads to stuck database seeding operations.
-
-- The secondary SQL Managed Instance is configured with the correct DNS zone ID. DNS zone is a property of a SQL Managed Instance and underlying virtual cluster, and its ID is included in the host name address. The zone ID is generated as a random string when the first SQL Managed Instance is created in each VNet and the same ID is assigned to all other instances in the same subnet. Once assigned, the DNS zone cannot be modified. SQL Managed Instances included in the same failover group must share the DNS zone. You accomplish this by passing the primary instance's zone ID as the value of DnsZonePartner parameter when creating the secondary instance.
-
- > [!NOTE]
- > For a detailed tutorial on configuring failover groups with SQL Managed Instance, see [add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md).
-
-## <a name="upgrading-or-downgrading-primary-database"></a> Scale primary database
-
-You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
-
-This sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem by making the primary read-only, at the expense of impacting all read-write workloads against the primary.
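As an illustration with placeholder names, a scale-up could be scripted so that the geo-secondary moves to the new compute size before the primary:

```powershell
# Illustrative scale-up order: geo-secondary first, then primary. All names are placeholders.
Set-AzSqlDatabase -ResourceGroupName "rg-secondary" -ServerName "server-secondary" `
    -DatabaseName "mydb" -RequestedServiceObjectiveName "P2"

Set-AzSqlDatabase -ResourceGroupName "rg-primary" -ServerName "server-primary" `
    -DatabaseName "mydb" -RequestedServiceObjectiveName "P2"
```
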
-
-> [!NOTE]
-> If you created a geo-secondary as part of the failover group configuration it is not recommended to scale down the geo-secondary. This is to ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.
-
-## <a name="preventing-the-loss-of-critical-data"></a> Prevent loss of critical data
-
-Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism. Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical transactions from data loss, an application developer can call the [sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) stored procedure immediately after committing the transaction. Calling `sp_wait_for_database_copy_sync` blocks the calling thread until the last committed transaction has been transmitted and hardened in the transaction log of the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on the secondary. `sp_wait_for_database_copy_sync` is scoped to a specific geo-replication link. Any user with the connection rights to the primary database can call this procedure.
-
-> [!NOTE]
-> `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
-
-## Failover groups and point-in-time restore
-
-For information about using point-in-time restore with failover groups, see [Point in Time Recovery (PITR)](recovery-using-backups.md#point-in-time-restore).
-
-## Limitations of failover groups
-
-Be aware of the following limitations:
-
-- Failover groups cannot be created between two servers or instances in the same Azure region.
-- Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name.
-- Database rename is not supported for databases in a failover group. You will need to temporarily delete the failover group to be able to rename a database, or remove the database from the failover group.
-- System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases require the objects to be manually created on the secondary instances and also manually kept in sync after any changes made on the primary instance. The only exception is the service master key (SMK) for SQL Managed Instance, which is replicated automatically to the secondary instance during creation of the failover group. Any subsequent changes of the SMK on the primary instance, however, will not be replicated to the secondary instance.
-- Failover groups cannot be created between instances if any of them are in an instance pool.
-
-## <a name="programmatically-managing-failover-groups"></a> Programmatically manage failover groups
-
-As discussed previously, auto-failover groups can also be managed programmatically using Azure PowerShell, Azure CLI, and REST API. The following tables describe the set of commands available. Active geo-replication includes a set of Azure Resource Manager APIs for management, including the [Azure SQL Database REST API](/rest/api/sql/) and [Azure PowerShell cmdlets](/powershell/azure/). These APIs require the use of resource groups and support Azure role-based access control (Azure RBAC). For more information on how to implement access roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-
-### <a name="manage-sql-database-failover"></a> Manage SQL Database geo-failover
-
-# [PowerShell](#tab/azure-powershell)
-
-| Cmdlet | Description |
-| | |
-| [New-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/new-azsqldatabasefailovergroup) |This command creates a failover group and registers it on both primary and secondary servers|
-| [Remove-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/remove-azsqldatabasefailovergroup) | Removes a failover group from the server |
-| [Get-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/get-azsqldatabasefailovergroup) | Retrieves a failover group's configuration |
-| [Set-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/set-azsqldatabasefailovergroup) |Modifies configuration of a failover group |
-| [Switch-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/switch-azsqldatabasefailovergroup) | Triggers failover of a failover group to the secondary server |
-| [Add-AzSqlDatabaseToFailoverGroup](/powershell/module/az.sql/add-azsqldatabasetofailovergroup)|Adds one or more databases to a failover group|
-
-# [Azure CLI](#tab/azure-cli)
-
-| Command | Description |
-| | |
-| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) |This command creates a failover group and registers it on both primary and secondary servers|
-| [az sql failover-group delete](/cli/azure/sql/failover-group#az-sql-failover-group-delete) | Removes a failover group from the server |
-| [az sql failover-group show](/cli/azure/sql/failover-group#az-sql-failover-group-show) | Retrieves a failover group configuration |
-| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group|
-| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
-
-# [REST API](#tab/rest-api)
-
-| API | Description |
-| | |
-| [Create or Update Failover Group](/rest/api/sql/failovergroups/createorupdate) | Creates or updates a failover group |
-| [Delete Failover Group](/rest/api/sql/failovergroups/delete) | Removes a failover group from the server |
-| [Failover (Planned)](/rest/api/sql/failovergroups/failover) | Triggers failover from the current primary server to the secondary server with full data synchronization.|
-| [Force Failover Allow Data Loss](/rest/api/sql/failovergroups/forcefailoverallowdataloss) | Triggers failover from the current primary server to the secondary server without synchronizing data. This operation may result in data loss. |
-| [Get Failover Group](/rest/api/sql/failovergroups/get) | Retrieves a failover group's configuration. |
-| [List Failover Groups By Server](/rest/api/sql/failovergroups/listbyserver) | Lists the failover groups on a server. |
-| [Update Failover Group](/rest/api/sql/failovergroups/update) | Updates a failover group's configuration. |
---
-### <a name="manage-sql-managed-instance-failover"></a> Manage SQL Managed Instance geo-failover
-
-# [PowerShell](#tab/azure-powershell)
-
-| Cmdlet | Description |
-| | |
-| [New-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/new-azsqldatabaseinstancefailovergroup) |This command creates a failover group and registers it on both primary and secondary instances|
-| [Set-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/set-azsqldatabaseinstancefailovergroup) |Modifies configuration of a failover group|
-| [Get-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/get-azsqldatabaseinstancefailovergroup) |Retrieves a failover group's configuration|
-| [Switch-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/switch-azsqldatabaseinstancefailovergroup) |Triggers failover of a failover group to the secondary instance|
-| [Remove-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/remove-azsqldatabaseinstancefailovergroup) | Removes a failover group|
--
-# [Azure CLI](#tab/azure-cli)
-
-| Command | Description |
-| | |
-| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) |This command creates a failover group and registers it on both primary and secondary servers|
-| [az sql failover-group delete](/cli/azure/sql/failover-group#az-sql-failover-group-delete) | Removes a failover group from the server |
-| [az sql failover-group show](/cli/azure/sql/failover-group#az-sql-failover-group-show) | Retrieves a failover group configuration |
-| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group|
-| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
-
-# [REST API](#tab/rest-api)
-
-| API | Description |
-| | |
-| [Create or Update Failover Group](/rest/api/sql/instancefailovergroups/createorupdate) | Creates or updates a failover group's configuration |
-| [Delete Failover Group](/rest/api/sql/instancefailovergroups/delete) | Removes a failover group from the instance |
-| [Failover (Planned)](/rest/api/sql/instancefailovergroups/failover) | Triggers failover from the current primary instance to this instance with full data synchronization. |
-| [Force Failover Allow Data Loss](/rest/api/sql/instancefailovergroups/forcefailoverallowdataloss) | Triggers failover from the current primary instance to the secondary instance without synchronizing data. This operation may result in data loss. |
-| [Get Failover Group](/rest/api/sql/instancefailovergroups/get) | Retrieves a failover group's configuration. |
-| [List Failover Groups - List By Location](/rest/api/sql/instancefailovergroups/listbylocation) | Lists the failover groups in a location. |
---
-## Next steps
--- For detailed tutorials, see
- - [Add SQL Database to a failover group](failover-group-add-single-database-tutorial.md)
- - [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
- - [Add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
-- For sample scripts, see:
- - [Use PowerShell to configure active geo-replication for Azure SQL Database](scripts/setup-geodr-and-failover-database-powershell.md)
- - [Use PowerShell to configure active geo-replication for a pooled database in Azure SQL Database](scripts/setup-geodr-and-failover-elastic-pool-powershell.md)
- - [Use PowerShell to add an Azure SQL Database to a failover group](scripts/add-database-to-failover-group-powershell.md)
-- For a business continuity overview and scenarios, see [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md)
-- To learn about Azure SQL Database automated backups, see [SQL Database automated backups](automated-backups-overview.md).
-- To learn about using automated backups for recovery, see [Restore a database from the service-initiated backups](recovery-using-backups.md).
-- To learn about authentication requirements for a new primary server and database, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
azure-sql Auto Failover Group Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-sql-db.md
+
+ Title: Auto-failover groups overview & best practices
+description: Auto-failover groups let you manage geo-replication and automatic / coordinated failover of a group of databases on a server for both single and pooled database in Azure SQL Database.
+ Last updated : 03/01/2022
+# Auto-failover groups overview & best practices (Azure SQL Database)
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](auto-failover-group-sql-db.md)
+> * [Azure SQL Managed Instance](../managed-instance/auto-failover-group-sql-mi.md)
+
+The auto-failover groups feature allows you to manage the replication and failover of some or all databases on a [logical server](logical-servers.md) to another region. This article focuses on using the Auto-failover group feature with Azure SQL Database and some best practices.
+
+To get started, review [Configure auto-failover group](auto-failover-group-configure-sql-db.md). For an end-to-end experience, see the [Auto-failover group tutorial](failover-group-add-single-database-tutorial.md).
++
+> [!NOTE]
+> - This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see [Auto-failover groups in Azure SQL Managed Instance](../managed-instance/auto-failover-group-sql-mi.md).
+> - Auto-failover groups support geo-replication of all databases in the group to only one secondary server in a different region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for the same primary replica, use [active geo-replication](active-geo-replication-overview.md).
+>
+
+## Overview
+++
+## <a name="terminology-and-capabilities"></a> Terminology and capabilities
+
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+-->
++
+- **Failover group (FOG)**
+
+ A failover group is a named group of databases managed by a single server that can fail over as a unit to another Azure region in case all or some primary databases become unavailable due to an outage in the primary region.
+
+ > [!IMPORTANT]
+ > The name of the failover group must be globally unique within the `.database.windows.net` domain.
+
+- **Servers**
+
+  Some or all of the user databases on a [logical server](logical-servers.md) can be placed in a failover group. Also, a single server can host multiple failover groups.
+
+- **Primary**
+
+ The server that hosts the primary databases in the failover group.
+
+- **Secondary**
+
+ The server that hosts the secondary databases in the failover group. The secondary cannot be in the same Azure region as the primary.
+
+- **Adding single databases to failover group**
+
+  You can put several single databases on the same server into the same failover group. If you add a single database to the failover group, it automatically creates a secondary database using the same edition and compute size on the secondary server, which you specified when the failover group was created. If you add a database that already has a secondary database in the secondary server, that geo-replication link is inherited by the group. When you add a database that already has a secondary database in a server that is not part of the failover group, a new secondary is created in the secondary server.
+
+ > [!IMPORTANT]
+ > Make sure that the secondary server doesn't have a database with the same name unless it is an existing secondary database.
+
+- **Adding databases in elastic pool to failover group**
+
+ You can put all or several databases within an elastic pool into the same failover group. If the primary database is in an elastic pool, the secondary is automatically created in the elastic pool with the same name (secondary pool). You must ensure that the secondary server contains an elastic pool with the same exact name and enough free capacity to host the secondary databases that will be created by the failover group. If you add a database in the pool that already has a secondary database in the secondary pool, that geo-replication link is inherited by the group. When you add a database that already has a secondary database in a server that is not part of the failover group, a new secondary is created in the secondary pool.
+
+- **Failover group read-write listener**
+
+ A DNS CNAME record that points to the current primary. It is created automatically when the failover group is created and allows the read-write workload to transparently reconnect to the primary when the primary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.database.windows.net`.
+
+- **Failover group read-only listener**
+
+ A DNS CNAME record that points to the current secondary. It is created automatically when the failover group is created and allows the read-only SQL workload to transparently connect to the secondary when the secondary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.database.windows.net`.
+
+- **Multiple failover groups**
+
+ You can configure multiple failover groups for the same pair of servers to control the scope of geo-failovers. Each group fails over independently. If your tenant-per-database application is deployed in multiple regions and uses elastic pools, you can use this capability to mix primary and secondary databases in each pool. This way you may be able to reduce the impact of an outage to only some tenant databases.
++
+## Failover group architecture
+
+A failover group in Azure SQL Database can include one or multiple databases, typically used by the same application. When you are using auto-failover groups with automatic failover policy, an outage that impacts one or several of the databases in the group will result in an automatic geo-failover.
+
+The auto-failover group must be configured on the primary server and will connect it to the secondary server in a different Azure region. The groups can include all or some databases in these servers. The following diagram illustrates a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.
+
+![Diagram shows a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.](./media/auto-failover-group-overview/auto-failover-group.png)
+
+When designing a service with business continuity in mind, follow the general guidelines and best practices outlined in this article. When configuring a failover group, ensure that authentication and network access on the secondary is set up to function correctly after geo-failover, when the geo-secondary becomes the new primary. For details, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md). For more information about designing solutions for disaster recovery, see [Designing Cloud Solutions for Disaster Recovery Using active geo-replication](designing-cloud-solutions-for-disaster-recovery.md).
+
+For information about using point-in-time restore with failover groups, see [Point in Time Recovery (PITR)](recovery-using-backups.md#point-in-time-restore).
++
+## Initial seeding
+
+When adding databases or elastic pools to a failover group, there is an initial seeding phase before data replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the initial seeding to complete depends on the size of your data, number of replicated databases, the load on primary databases, and the speed of the link between the primary and secondary. Under normal circumstances, possible seeding speed is up to 500 GB an hour for SQL Database. Seeding is performed for all databases in parallel.
++
+## <a name="using-one-or-several-failover-groups-to-manage-failover-of-multiple-databases"></a> Use multiple failover groups to failover multiple databases
+
+One or many failover groups can be created between two servers in different regions (primary and secondary servers). Each group can include one or several databases that are recovered as a unit in case all or some primary databases become unavailable due to an outage in the primary region. Creating a failover group creates geo-secondary databases with the same service objective as the primary. If you add an existing geo-replication relationship to a failover group, make sure the geo-secondary is configured with the same service tier and compute size as the primary.
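A minimal sketch, assuming the Az.Sql module and placeholder names, of two independent failover groups between the same pair of servers, each owning a different set of databases:

```powershell
# Illustrative: two failover groups between the same pair of servers, each failing over independently.
# All names are placeholders.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName "myResourceGroup" -ServerName "primary-server" `
    -PartnerServerName "secondary-server" -FailoverGroupName "fog-tenants-a" -FailoverPolicy Automatic

New-AzSqlDatabaseFailoverGroup -ResourceGroupName "myResourceGroup" -ServerName "primary-server" `
    -PartnerServerName "secondary-server" -FailoverGroupName "fog-tenants-b" -FailoverPolicy Automatic

# Add each database to the group it should fail over with.
$db = Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "primary-server" -DatabaseName "tenant-001"
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "myResourceGroup" -ServerName "primary-server" `
    -FailoverGroupName "fog-tenants-a" -Database $db
```
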
+
+## <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener (primary)
+
+For read-write workloads, use `<fog-name>.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the primary. This name does not change after failover. Note the failover involves updating the DNS record so the client connections are redirected to the new primary only after the client DNS cache is refreshed. The time to live (TTL) of the primary and secondary listener DNS record is 30 seconds.
+
+## <a name="using-read-only-listener-for-read-only-workload"></a> Use the read-only listener (secondary)
+
+If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. For read-only sessions, use `<fog-name>.secondary.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the geo-secondary. It is also recommended that you indicate read intent in the connection string by using `ApplicationIntent=ReadOnly`.
+
+In Premium, Business Critical, and Hyperscale service tiers, SQL Database supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location:
+- To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.database.windows.net`.
+- To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.database.windows.net`.
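A minimal connection sketch through the read-only listener is shown below; the database name, credentials, and query are placeholders, and any client driver that supports `ApplicationIntent` behaves the same way.

```powershell
# Illustrative read-only session against the geo-secondary through the failover group listener.
# Replace <fog-name>, the database name, and the credentials with your own values.
$connectionString = "Server=tcp:<fog-name>.secondary.database.windows.net,1433;" +
                    "Database=mydb;User ID=appuser;Password=<password>;" +
                    "Encrypt=True;ApplicationIntent=ReadOnly;"

$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()

$command = $connection.CreateCommand()
$command.CommandText = "SELECT COUNT(*) FROM dbo.Orders;"   # placeholder read-only query
$command.ExecuteScalar()

$connection.Close()
```
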
+
+## <a name="preparing-for-performance-degradation"></a> Potential performance degradation after failover
+
+A typical Azure application uses multiple Azure services and consists of multiple components. The automatic geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage and their components may still be available in that region. Once the primary databases switch to the secondary (DR) region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the DR region, follow these [network security guidelines](#failover-groups-and-network-security), and orchestrate the geo-failover of relevant application components together with the database.
+
+## <a name="preparing-for-data-loss"></a> Potential data loss after failover
+
+If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary. If the automatic failover policy is configured, the system waits for the period you specified by `GracePeriodWithDataLossHours` before initiating an automatic geo-failover. The default value is 1 hour. This favors database availability over no data loss. Setting `GracePeriodWithDataLossHours` to a larger number, such as 24 hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database availability.
+
+> [!IMPORTANT]
+> Elastic pools with 800 or fewer DTUs or 8 or fewer vCores, and more than 250 databases may encounter issues including longer planned geo-failovers and degraded performance. These issues are more likely to occur for write intensive workloads, when geo-replicas are widely separated by geography, or when multiple secondary geo-replicas are used for each database. A symptom of these issues is an increase in geo-replication lag over time, potentially leading to a more extensive data loss in an outage. This lag can be monitored using [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database). If these issues occur, then mitigation includes scaling up the pool to have more DTUs or vCores, or reducing the number of geo-replicated databases in the pool.
++
+## Failover groups and network security
+
+For some applications the security rules require that the network access to the data tier is restricted to a specific component or components such as a VM, web service, etc. This requirement presents some challenges for business continuity design and the use of failover groups. Consider the following options when implementing such restricted access.
+
+### <a name="using-failover-groups-and-virtual-network-rules"></a> Use failover groups and virtual network service endpoints
+
+If you are using [Virtual Network service endpoints and rules](vnet-service-endpoint-rule-overview.md) to restrict access to your database in SQL Database, be aware that each virtual network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet. Therefore, only the client applications deployed in the same region can connect to the primary database. Since a geo-failover results in the SQL Database client sessions being rerouted to a server in a different (secondary) region, these sessions will fail if originated from a client outside of that region. For that reason, the automatic failover policy cannot be enabled if the participating servers or instances are included in the Virtual Network rules. To support manual failover, follow these steps:
+
+1. Provision the redundant copies of the front-end components of your application (web service, virtual machines etc.) in the secondary region.
+2. Configure the [virtual network rules](vnet-service-endpoint-rule-overview.md) individually for primary and secondary server.
+3. Enable the [front-end failover using a Traffic manager configuration](designing-cloud-solutions-for-disaster-recovery.md#scenario-1-using-two-azure-regions-for-business-continuity-with-minimal-downtime).
+4. Initiate manual geo-failover when the outage is detected. This option is optimized for the applications that require consistent latency between the front-end and the data tier and supports recovery when either front end, data tier or both are impacted by the outage.
+
+> [!NOTE]
+> If you are using the **read-only listener** to load-balance a read-only workload, make sure that this workload is executed in a VM or other resource in the secondary region so it can connect to the secondary database.
+
+### Use failover groups and firewall rules
+
+If your business continuity plan requires failover using groups with automatic failover, you can restrict access to your database in SQL Database by using public IP firewall rules. To support automatic failover, follow these steps:
+
+1. [Create a public IP](../../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address).
+2. [Create a public load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) and assign the public IP to it.
+3. [Create a virtual network and the virtual machines](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) for your front-end components.
+4. [Create network security group](../../virtual-network/network-security-groups-overview.md) and configure inbound connections.
+5. Ensure that the outbound connections are open to Azure SQL Database in a region by using an `Sql.<Region>` [service tag](../../virtual-network/network-security-groups-overview.md#service-tags).
+6. Create a [SQL Database firewall rule](firewall-configure.md) to allow inbound traffic from the public IP address you create in step 1.
+
+For more information on how to configure outbound access and what IP to use in the firewall rules, see [Load balancer outbound connections](../../load-balancer/load-balancer-outbound-connections.md).
+
+The above configuration ensures that an automatic geo-failover will not block connections from the front-end components. It assumes that the application can tolerate the longer latency between the front end and the data tier.
+
+> [!IMPORTANT]
+> To guarantee business continuity during regional outages you must ensure geographic redundancy for both front-end components and databases.
+
+## <a name="upgrading-or-downgrading-primary-database"></a> Scale primary database
+
+You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
+
+This sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem by making the primary read-only, at the expense of impacting all read-write workloads against the primary.
+
+> [!NOTE]
+> If you created a geo-secondary as part of the failover group configuration it is not recommended to scale down the geo-secondary. This is to ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.
+
+## <a name="preventing-the-loss-of-critical-data"></a> Prevent loss of critical data
+
+<!--
+There is some overlap in the following content, be sure to update all that's necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+-->
+
+Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism. Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical transactions from data loss, an application developer can call the [sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) stored procedure immediately after committing the transaction. Calling `sp_wait_for_database_copy_sync` blocks the calling thread until the last committed transaction has been transmitted and hardened in the transaction log of the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on the secondary. `sp_wait_for_database_copy_sync` is scoped to a specific geo-replication link. Any user with the connection rights to the primary database can call this procedure.
+
+> [!NOTE]
+> `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
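For illustration, the call could be issued right after committing a critical transaction, for example from PowerShell with the `SqlServer` module. The listener, database, credentials, table, and partner server names below are placeholders, and the procedure's parameter names should be confirmed against the linked reference.

```powershell
# Illustrative: commit a critical transaction, then block until it is hardened on the geo-secondary.
# Listener name, database, credentials, table, and the partner server name are placeholders.
Invoke-Sqlcmd -ServerInstance "<fog-name>.database.windows.net" -Database "mydb" `
    -Username "appuser" -Password "<password>" -Query @"
BEGIN TRAN;
    UPDATE dbo.Orders SET Status = 'Confirmed' WHERE OrderId = 42;
COMMIT TRAN;
EXEC sys.sp_wait_for_database_copy_sync @target_server = N'secondary-server', @target_database = N'mydb';
"@
```
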
++
+## Permissions
+
+<!--
+There is some overlap of content in the following three articles, be sure to make changes in all three places if necessary
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor role](../../role-based-access-control/built-in-roles.md#sql-server-contributor) has all the necessary permissions to manage failover groups.
+
+For specific permission scopes, review how to [configure auto-failover groups in Azure SQL Database](auto-failover-group-sql-db.md#permissions).
+
+## Limitations
+
+Be aware of the following limitations:
+
+- Failover groups cannot be created between two servers in the same Azure region.
+- Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name.
+- Database rename is not supported for databases in a failover group. To rename a database, either temporarily delete the failover group, or remove the database from the failover group.
+
+## <a name="programmatically-managing-failover-groups"></a> Programmatically manage failover groups
+
+As discussed previously, auto-failover groups can also be managed programmatically using Azure PowerShell, Azure CLI, and REST API. The following tables describe the set of commands available. Active geo-replication includes a set of Azure Resource Manager APIs for management, including the [Azure SQL Database REST API](/rest/api/sql/) and [Azure PowerShell cmdlets](/powershell/azure/). These APIs require the use of resource groups and support Azure role-based access control (Azure RBAC). For more information on how to implement access roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
++
+# [PowerShell](#tab/azure-powershell)
+
+| Cmdlet | Description |
+| | |
+| [New-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/new-azsqldatabasefailovergroup) |This command creates a failover group and registers it on both primary and secondary servers|
+| [Remove-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/remove-azsqldatabasefailovergroup) | Removes a failover group from the server |
+| [Get-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/get-azsqldatabasefailovergroup) | Retrieves a failover group's configuration |
+| [Set-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/set-azsqldatabasefailovergroup) |Modifies configuration of a failover group |
+| [Switch-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/switch-azsqldatabasefailovergroup) | Triggers failover of a failover group to the secondary server |
+| [Add-AzSqlDatabaseToFailoverGroup](/powershell/module/az.sql/add-azsqldatabasetofailovergroup)|Adds one or more databases to a failover group|
+
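+For example, a minimal sketch that uses the cmdlets above to create a failover group and add an existing database to it (all resource names are placeholders):
+
+```powershell-interactive
+# Get the database to protect on the primary server.
+$database = Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
+    -ServerName "primary-server" -DatabaseName "mydb"
+
+# Create the failover group between the primary and secondary servers.
+New-AzSqlDatabaseFailoverGroup -ResourceGroupName "myResourceGroup" `
+    -ServerName "primary-server" -PartnerServerName "secondary-server" `
+    -FailoverGroupName "myfailovergroup" -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
+
+# Add the database to the failover group; geo-replication to the secondary starts automatically.
+Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "myResourceGroup" `
+    -ServerName "primary-server" -FailoverGroupName "myfailovergroup" -Database $database
+```
+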
+# [Azure CLI](#tab/azure-cli)
+
+| Command | Description |
+| | |
+| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) |This command creates a failover group and registers it on both primary and secondary servers|
+| [az sql failover-group delete](/cli/azure/sql/failover-group#az-sql-failover-group-delete) | Removes a failover group from the server |
+| [az sql failover-group show](/cli/azure/sql/failover-group#az-sql-failover-group-show) | Retrieves a failover group configuration |
+| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group|
+| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
+
+# [REST API](#tab/rest-api)
+
+| API | Description |
+| | |
+| [Create or Update Failover Group](/rest/api/sql/failovergroups/createorupdate) | Creates or updates a failover group |
+| [Delete Failover Group](/rest/api/sql/failovergroups/delete) | Removes a failover group from the server |
+| [Failover (Planned)](/rest/api/sql/failovergroups/failover) | Triggers failover from the current primary server to the secondary server with full data synchronization.|
+| [Force Failover Allow Data Loss](/rest/api/sql/failovergroups/forcefailoverallowdataloss) | Triggers failover from the current primary server to the secondary server without synchronizing data. This operation may result in data loss. |
+| [Get Failover Group](/rest/api/sql/failovergroups/get) | Retrieves a failover group's configuration. |
+| [List Failover Groups By Server](/rest/api/sql/failovergroups/listbyserver) | Lists the failover groups on a server. |
+| [Update Failover Group](/rest/api/sql/failovergroups/update) | Updates a failover group's configuration. |
+++++
+## Next steps
+
+- For detailed tutorials, see
+ - [Add SQL Database to a failover group](failover-group-add-single-database-tutorial.md)
+ - [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
+- For sample scripts, see:
+ - [Use PowerShell to configure active geo-replication for Azure SQL Database](scripts/setup-geodr-and-failover-database-powershell.md)
+ - [Use PowerShell to configure active geo-replication for a pooled database in Azure SQL Database](scripts/setup-geodr-and-failover-elastic-pool-powershell.md)
+ - [Use PowerShell to add an Azure SQL Database to a failover group](scripts/add-database-to-failover-group-powershell.md)
+- For a business continuity overview and scenarios, see [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md)
+- To learn about Azure SQL Database automated backups, see [SQL Database automated backups](automated-backups-overview.md).
+- To learn about using automated backups for recovery, see [Restore a database from the service-initiated backups](recovery-using-backups.md).
+- To learn about authentication requirements for a new primary server and database, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Last updated 01/26/2022
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Configure a failover group for an Azure SQL Database elastic pool and test failover using the Azure portal. In this tutorial, you'll learn how to:
+> [!div class="op_single_selector"]
+> * [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
+> * [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
+> * [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
++
+Configure an [auto-failover group](auto-failover-group-sql-db.md) for an Azure SQL Database elastic pool and test failover using the Azure portal.
+
+In this tutorial, you'll learn how to:
> [!div class="checklist"] > > - Create a single database. > - Add the database to an elastic pool.
-> - Create a [failover group](auto-failover-group-overview.md) for two elastic pools between two servers.
+> - Create a failover group for two elastic pools between two servers.
> - Test failover. ## Prerequisites
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
Title: "Tutorial: Add a database to a failover group"
-description: Add a database in Azure SQL Database to an autofailover group using the Azure portal, PowerShell, or the Azure CLI.
+description: Add a database in Azure SQL Database to an auto-failover group using the Azure portal, PowerShell, or the Azure CLI.
Last updated 01/26/2022
-# Tutorial: Add an Azure SQL Database to an autofailover group
-
+# Tutorial: Add an Azure SQL Database to an auto-failover group
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-A [failover group](auto-failover-group-overview.md) is a declarative abstraction layer that allows you to group multiple geo-replicated databases. Learn to configure a failover group for an Azure SQL Database and test failover using either the Azure portal, PowerShell, or the Azure CLI. In this tutorial, you'll learn how to:
+> [!div class="op_single_selector"]
+> * [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
+> * [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
+> * [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
++
+A [failover group](auto-failover-group-sql-db.md) is a declarative abstraction layer that allows you to group multiple geo-replicated databases. Learn to configure a failover group for an Azure SQL Database and test failover using either the Azure portal, PowerShell, or the Azure CLI. In this tutorial, you'll learn how to:
> [!div class="checklist"] >
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/high-availability-sla.md
A failover can be initiated using PowerShell, REST API, or Azure CLI:
|:|:|:|:| |Database|[Invoke-AzSqlDatabaseFailover](/powershell/module/az.sql/invoke-azsqldatabasefailover)|[Database failover](/rest/api/sql/databases/failover)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI| |Elastic pool|[Invoke-AzSqlElasticPoolFailover](/powershell/module/az.sql/invoke-azsqlelasticpoolfailover)|[Elastic pool failover](/javascript/api/@azure/arm-sql/elasticpools)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
-|Managed Instance|[Invoke-AzSqlInstanceFailover](/powershell/module/az.sql/Invoke-AzSqlInstanceFailover/)|[Managed Instances - Failover](/rest/api/sql/managed%20instances%20-%20failover/failover)|[az sql mi failover](/cli/azure/sql/mi/#az-sql-mi-failover)|
+|Managed Instance|[Invoke-AzSqlInstanceFailover](/powershell/module/az.sql/Invoke-AzSqlInstanceFailover/)|[Managed Instances - Failover](/rest/api/sql/managed%20instances%20-%20failover/failover)|[az sql mi failover](/cli/azure/sql/mi/#az-sql-mi-failover) may be used to invoke failover from Azure CLI|
> [!IMPORTANT] > The Failover command is not available for readable secondary replicas of Hyperscale databases.
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/how-to-content-reference-guide.md
In this article you can find a content reference of various guides, scripts, and
- [Configure Conditional Access](conditional-access-configure.md) - [Multi-factor Azure AD auth](authentication-mfa-ssms-overview.md) - [Configure Multi-Factor Authentication](authentication-mfa-ssms-configure.md)
+- [Configure backup retention](long-term-backup-retention-configure.md) for a database to keep your backups on Azure Blob Storage.
+- [Configure geo-replication](active-geo-replication-overview.md) to keep a replica of your database in another region.
+- [Configure auto-failover group](auto-failover-group-configure-sql-db.md) to automatically failover a group of single or pooled databases to a secondary server in another region in the event of a disaster.
- [Configure temporal retention policy](temporal-tables-retention-policy.md) - [Configure TDE with BYOK](transparent-data-encryption-byok-configure.md) - [Rotate TDE BYOK keys](transparent-data-encryption-byok-key-rotation.md)
In this article you can find a content reference of various guides, scripts, and
- [Configure transactional replication](replication-to-sql-database.md) to replicate your data between databases. - [Configure threat detection](threat-detection-configure.md) to let Azure SQL Database identify suspicious activities such as SQL Injection or access from suspicious locations. - [Configure dynamic data masking](dynamic-data-masking-configure-portal.md) to protect your sensitive data.-- [Configure backup retention](long-term-backup-retention-configure.md) for a database to keep your backups on Azure Blob Storage. -- [Configure geo-replication](active-geo-replication-overview.md) to keep a replica of your database in another region. - [Configure security for geo-replicas](active-geo-replication-security-configure.md). ## Monitor and tune your database
azure-sql Move Resources Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/move-resources-across-regions.md
This article provides a general workflow for moving resources to a different reg
1. Create a [failover group](failover-group-add-single-database-tutorial.md#2create-the-failover-group) between the server of the source and the server of the target. 1. Add the databases you want to move to the failover group.
- Replication of all added databases will be initiated automatically. For more information, see [Best practices for using failover groups with single databases](auto-failover-group-overview.md#best-practices-for-sql-database).
+ Replication of all added databases will be initiated automatically. For more information, see [Using failover groups with SQL Database](auto-failover-group-sql-db.md).
### Monitor the preparation process
Once the move completes, remove the resources in the source region to avoid unne
1. Create a separate [failover group](failover-group-add-elastic-pool-tutorial.md#3create-the-failover-group) between each elastic pool on the source server and its counterpart elastic pool on the target server. 1. Add all the databases in the pool to the failover group.
- Replication of the added databases will be initiated automatically. For more information, see [Best practices for failover groups with elastic pools](auto-failover-group-overview.md#best-practices-for-sql-database).
+ Replication of the added databases will be initiated automatically. For more information, see [Using failover groups with SQL Database](auto-failover-group-sql-db.md).
> [!NOTE] > While it is possible to create a failover group that includes multiple elastic pools, we strongly recommend that you create a separate failover group for each pool. If you have a large number of databases across multiple elastic pools that you need to move, you can run the preparation steps in parallel and then initiate the move step in parallel. This process will scale better and will take less time compared to having multiple elastic pools in the same failover group.
azure-sql Auto Failover Group Configure Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+
+ Title: Configure an auto-failover group
+description: Learn how to configure an auto-failover group for Azure SQL Managed Instance by using the Azure portal, and Azure PowerShell.
+++++
+ms.devlang:
+++ Last updated : 03/01/2022+
+# Configure an auto-failover group for Azure SQL Managed Instance
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](../database/auto-failover-group-configure-sql-db.md)
+> * [Azure SQL Managed Instance](auto-failover-group-configure-sql-mi.md)
+
+This topic teaches you how to configure an [auto-failover group](auto-failover-group-sql-mi.md) for Azure SQL Managed Instance using the Azure portal and Azure PowerShell. For an end-to-end experience, review the [Auto-failover group tutorial](failover-group-add-instance-tutorial.md).
+
+> [!NOTE]
+> This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see [Configure auto-failover groups in SQL Database](../database/auto-failover-group-configure-sql-db.md).
++
+## Prerequisites
+
+Consider the following prerequisites:
+
+- The secondary managed instance must be empty.
+- The subnet range for the secondary virtual network must not overlap the subnet range of the primary virtual network.
+- The collation and time zone of the secondary managed instance must match that of the primary managed instance.
+- When connecting the two gateways, the **Shared Key** should be the same for both connections.
+- You will need to either configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) or create a gateway for the virtual network of each SQL Managed Instance, connect the two gateways, and then create the failover group.
+- Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. Managed instances residing in geo-paired regions have much better performance compared to unpaired regions.
+
+## Create primary virtual network gateway
+
+If you have not configured [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), you can create the primary virtual network gateway with the Azure portal, or PowerShell.
+
+> [!NOTE]
+> The SKU of the gateway affects throughput performance. This article deploys a gateway with the most basic SKU (`VpnGw1`). Deploy a higher SKU (example: `VpnGw3`) to achieve higher throughput. For all available options, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#benchmark).
+
+# [Portal](#tab/azure-portal)
+
+Create the primary virtual network gateway using the Azure portal.
+
+1. In the [Azure portal](https://portal.azure.com), go to your resource group and select the **Virtual network** resource for your primary managed instance.
+1. Select **Subnets** under **Settings** and then select to add a new **Gateway subnet**. Leave the default values.
+
+ ![Add gateway for primary managed instance](./media/auto-failover-group-configure-sql-mi/add-subnet-gateway-primary-vnet.png)
+
+1. Once the subnet gateway is created, select **Create a resource** from the left navigation pane and then type `Virtual network gateway` in the search box. Select the **Virtual network gateway** resource published by **Microsoft**.
+
+ ![Create a new virtual network gateway](./media/auto-failover-group-configure-sql-mi/create-virtual-network-gateway.png)
+
+1. Fill out the required fields to configure the gateway for your primary managed instance.
+
+ The following table shows the values necessary for the gateway for the primary managed instance:
+
+ | **Field** | Value |
+ | | |
+ | **Subscription** | The subscription where your primary managed instance is. |
+ | **Name** | The name for your virtual network gateway. |
+ | **Region** | The region where your primary managed instance is. |
+ | **Gateway type** | Select **VPN**. |
+ | **VPN Type** | Select **Route-based** |
+ | **SKU**| Leave default of `VpnGw1`. |
+ | **Location**| The location where your primary managed instance and primary virtual network are. |
+ | **Virtual network**| Select the virtual network for your primary managed instance. |
+ | **Public IP address**| Select **Create new**. |
+ | **Public IP address name**| Enter a name for your IP address. |
+ | &nbsp; | &nbsp; |
+
+1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
+
+ ![Primary gateway settings](./media/auto-failover-group-configure-sql-mi/settings-for-primary-gateway.png)
+
+1. Select **Create** to create your new virtual network gateway.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create the primary virtual network gateway using PowerShell.
+
+ ```powershell-interactive
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $primaryVnetName = "<Primary-Virtual-Network-Name>"
+ $primaryGWName = "<Primary-Gateway-Name>"
+ $primaryGWPublicIPAddress = $primaryGWName + "-ip"
+ $primaryGWIPConfig = $primaryGWName + "-ipc"
+ $primaryGWAsn = 61000
+
+ # Get the primary virtual network
+ $vnet1 = Get-AzVirtualNetwork -Name $primaryVnetName -ResourceGroupName $primaryResourceGroupName
+ $primaryLocation = $vnet1.Location
+
+ # Create primary gateway
+ Write-host "Creating primary gateway..."
+ $subnet1 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet1
+ $gwpip1= New-AzPublicIpAddress -Name $primaryGWPublicIPAddress -ResourceGroupName $primaryResourceGroupName `
+ -Location $primaryLocation -AllocationMethod Dynamic
+ $gwipconfig1 = New-AzVirtualNetworkGatewayIpConfig -Name $primaryGWIPConfig `
+ -SubnetId $subnet1.Id -PublicIpAddressId $gwpip1.Id
+
+ $gw1 = New-AzVirtualNetworkGateway -Name $primaryGWName -ResourceGroupName $primaryResourceGroupName `
+ -Location $primaryLocation -IpConfigurations $gwipconfig1 -GatewayType Vpn `
+ -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $primaryGWAsn
+ $gw1
+ ```
+++
+## Create secondary virtual network gateway
+
+Create the secondary virtual network gateway using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Repeat the steps in the previous section to create the virtual network subnet and gateway for the secondary managed instance. Fill out the required fields to configure the gateway for your secondary managed instance.
+
+The following table shows the values necessary for the gateway for the secondary managed instance:
+
+ | **Field** | Value |
+ | | |
+ | **Subscription** | The subscription where your secondary managed instance is. |
+ | **Name** | The name for your virtual network gateway, such as `secondary-mi-gateway`. |
+ | **Region** | The region where your secondary managed instance is. |
+ | **Gateway type** | Select **VPN**. |
+ | **VPN Type** | Select **Route-based** |
+ | **SKU**| Leave default of `VpnGw1`. |
+ | **Location**| The location where your secondary managed instance and secondary virtual network are. |
+ | **Virtual network**| Select the virtual network for your secondary managed instance, such as `vnet-sql-mi-secondary`. |
+ | **Public IP address**| Select **Create new**. |
+ | **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
+ | &nbsp; | &nbsp; |
+
+ ![Secondary gateway settings](./media/auto-failover-group-configure-sql-mi/settings-for-secondary-gateway.png)
+
+# [PowerShell](#tab/azure-powershell)
+
+Create the secondary virtual network gateway using PowerShell.
+
+ ```powershell-interactive
+ $secondaryResourceGroupName = "<Secondary-Resource-Group>"
+ $secondaryVnetName = "<Secondary-Virtual-Network-Name>"
+ $secondaryGWName = "<Secondary-Gateway-Name>"
+ $secondaryGWPublicIPAddress = $secondaryGWName + "-IP"
+ $secondaryGWIPConfig = $secondaryGWName + "-ipc"
+ $secondaryGWAsn = 62000
+
+ # Get the secondary virtual network
+ $vnet2 = Get-AzVirtualNetwork -Name $secondaryVnetName -ResourceGroupName $secondaryResourceGroupName
+ $secondaryLocation = $vnet2.Location
+
+ # Create the secondary gateway
+ Write-host "Creating secondary gateway..."
+ $subnet2 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet2
+ $gwpip2= New-AzPublicIpAddress -Name $secondaryGWPublicIPAddress -ResourceGroupName $secondaryResourceGroupName `
+ -Location $secondaryLocation -AllocationMethod Dynamic
+ $gwipconfig2 = New-AzVirtualNetworkGatewayIpConfig -Name $secondaryGWIPConfig `
+ -SubnetId $subnet2.Id -PublicIpAddressId $gwpip2.Id
+
+ $gw2 = New-AzVirtualNetworkGateway -Name $secondaryGWName -ResourceGroupName $secondaryResourceGroupName `
+ -Location $secondaryLocation -IpConfigurations $gwipconfig2 -GatewayType Vpn `
+ -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $secondaryGWAsn
+
+ $gw2
+ ```
+++
+## Connect the gateways
+
+Create connections between the two gateways using the Azure portal or PowerShell.
+
+Two connections need to be created - the connection from the primary gateway to the secondary gateway, and then the connection from the secondary gateway to the primary gateway.
+
+Use the same shared key for both connections.
+
+# [Portal](#tab/azure-portal)
+
+Create connections between the two gateways using the Azure portal.
+
+1. Select **Create a resource** from the [Azure portal](https://portal.azure.com).
+1. Type `connection` in the search box and then press enter to search, which takes you to the **Connection** resource, published by Microsoft.
+1. Select **Create** to create your connection.
+1. On the **Basics** tab, select the following values and then select **OK**.
+ 1. Select `VNet-to-VNet` for the **Connection type**.
+ 1. Select your subscription from the drop-down.
+ 1. Select the resource group for your managed instance in the drop-down.
+ 1. Select the location of your primary managed instance from the drop-down.
+1. On the **Settings** tab, select or enter the following values and then select **OK**:
+ 1. Choose the primary network gateway for the **First virtual network gateway**, such as `Primary-Gateway`.
+ 1. Choose the secondary network gateway for the **Second virtual network gateway**, such as `Secondary-Gateway`.
+ 1. Select the checkbox next to **Establish bidirectional connectivity**.
+ 1. Either leave the default primary connection name, or rename it to a value of your choice.
+ 1. Provide a **Shared key (PSK)** for the connection, such as `mi1mi2psk`.
+
+ ![Create gateway connection](./media/auto-failover-group-configure-sql-mi/create-gateway-connection.png)
+
+1. On the **Summary** tab, review the settings for your bidirectional connection and then select **OK** to create your connection.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create connections between the two gateways using PowerShell.
+
+ ```powershell-interactive
+ $vpnSharedKey = "mi1mi2psk"
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $primaryGWConnection = "<Primary-connection-name>"
+ $primaryLocation = "<Primary-Region>"
+ $secondaryResourceGroupName = "<Secondary-Resource-Group>"
+ $secondaryGWConnection = "<Secondary-connection-name>"
+ $secondaryLocation = "<Secondary-Region>"
+
+ # Connect the primary to secondary gateway
+ Write-host "Connecting the primary gateway"
+ New-AzVirtualNetworkGatewayConnection -Name $primaryGWConnection -ResourceGroupName $primaryResourceGroupName `
+ -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 -Location $primaryLocation `
+ -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
+ $primaryGWConnection
+
+ # Connect the secondary to primary gateway
+ Write-host "Connecting the secondary gateway"
+
+ New-AzVirtualNetworkGatewayConnection -Name $secondaryGWConnection -ResourceGroupName $secondaryResourceGroupName `
+ -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 -Location $secondaryLocation `
+ -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
+ $secondaryGWConnection
+ ```
+++
+## Create the failover group
+
+Create the failover group for your managed instances by using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Create the failover group for your SQL Managed Instances by using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the primary managed instance you want to add to the failover group.
+1. Under **Settings**, navigate to **Instance Failover Groups** and then choose to **Add group** to open the **Instance Failover Group** page.
+
+ ![Add a failover group](./media/auto-failover-group-configure-sql-mi/add-failover-group.png)
+
+1. On the **Instance Failover Group** page, type the name of your failover group and then choose the secondary managed instance from the drop-down. Select **Create** to create your failover group.
+
+ ![Create failover group](./media/auto-failover-group-configure-sql-mi/create-failover-group.png)
+
+1. Once failover group deployment is complete, you will be taken back to the **Failover group** page.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create the failover group for your managed instances using PowerShell.
+
+ ```powershell-interactive
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $primaryLocation = "<Primary-Region>"
+ $secondaryLocation = "<Secondary-Region>"
+ $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
+ $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
+
+ # Create failover group
+ Write-host "Creating the failover group..."
+ $failoverGroup = New-AzSqlDatabaseInstanceFailoverGroup -Name $failoverGroupName `
+ -Location $primaryLocation -ResourceGroupName $primaryResourceGroupName -PrimaryManagedInstanceName $primaryManagedInstance `
+ -PartnerRegion $secondaryLocation -PartnerManagedInstanceName $secondaryManagedInstance `
+ -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
+ $failoverGroup
+ ```
+++
+## Test failover
+
+Test failover of your failover group using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Test failover of your failover group using the Azure portal.
+
+1. Navigate to your _secondary_ managed instance within the [Azure portal](https://portal.azure.com) and select **Instance Failover Groups** under settings.
+1. Review which managed instance is the primary, and which managed instance is the secondary.
+1. Select **Failover** and then select **Yes** on the warning about TDS sessions being disconnected.
+
+ ![Fail over the failover group](./media/auto-failover-group-configure-sql-mi/failover-mi-failover-group.png)
+
+1. Review which managed instance is the primary and which instance is the secondary. If failover succeeded, the two instances should have switched roles.
+
+ ![Managed instances have switched roles after failover](./media/auto-failover-group-configure-sql-mi/mi-switched-after-failover.png)
+
+1. Go to the new _secondary_ managed instance and select **Failover** once again to fail the primary instance back to the primary role.
+
+# [PowerShell](#tab/azure-powershell)
+
+Test failover of your failover group using PowerShell.
+
+ ```powershell-interactive
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $secondaryResourceGroupName = "<Secondary-Resource-Group>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $primaryLocation = "<Primary-Region>"
+ $secondaryLocation = "<Secondary-Region>"
+ $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
+ $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
+
+ # Verify the current primary role
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName
+
+ # Failover the primary managed instance to the secondary role
+ Write-host "Failing primary over to the secondary location"
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $secondaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
+ Write-host "Successfully failed failover group to secondary location"
+
+ # Verify the current primary role
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName
+
+ # Fail primary managed instance back to primary role
+ Write-host "Failing primary back to primary role"
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $primaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
+ Write-host "Successfully failed failover group to primary location"
+
+ # Verify the current primary role
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName
+ ```
+++++
+## Locate listener endpoint
+
+Once your failover group is configured, update the connection string for your application to the listener endpoint. This will keep your application connected to the failover group listener, rather than the primary database, elastic pool, or instance database. That way, you don't have to manually update the connection string every time your database entity fails over, and traffic is routed to whichever entity is currently primary.
+
+The listener endpoint is in the form of `<fog-name>.<zone_id>.database.windows.net`, and is visible in the Azure portal when viewing the failover group:
+
+![Failover group connection string](./media/auto-failover-group-configure-sql-mi/find-failover-group-connection-string.png)
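+
+For illustration only, a connection string that targets the read-write listener might look like the following sketch. The host name, database, and credentials are placeholders; copy the actual listener value from the portal.
+
+```powershell-interactive
+# The listener host name stays the same after a failover, so the application never needs to change it.
+$listener = "fog-name.zone-id.database.windows.net"
+$connectionString = "Server=tcp:$listener,1433;Initial Catalog=mydb;User ID=<user>;Password=<password>;Encrypt=True;"
+```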
+
+## <a name="creating-a-failover-group-between-managed-instances-in-different-subscriptions"></a> Create group between instances in different subscriptions
+
+You can create a failover group between SQL Managed Instances in two different subscriptions, as long as the subscriptions are associated with the same [Azure Active Directory Tenant](../../active-directory/fundamentals/active-directory-whatis.md#terminology). When using the PowerShell API, you can do it by specifying the `PartnerSubscriptionId` parameter for the secondary SQL Managed Instance. When using the REST API, each instance ID included in the `properties.managedInstancePairs` parameter can have its own subscription ID.
+
+> [!IMPORTANT]
+> Azure portal does not support creation of failover groups across different subscriptions. Also, for the existing failover groups across different subscriptions and/or resource groups, failover cannot be initiated manually via portal from the primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.
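+
+A hypothetical PowerShell sketch is shown below. It assumes your Az.Sql module version exposes the `PartnerSubscriptionId` parameter on `New-AzSqlDatabaseInstanceFailoverGroup` as described above; all names and IDs are placeholders.
+
+```powershell-interactive
+# Create a failover group whose secondary managed instance lives in a different subscription.
+New-AzSqlDatabaseInstanceFailoverGroup -Name "cross-sub-fog" `
+    -ResourceGroupName "<Primary-Resource-Group>" -Location "<Primary-Region>" `
+    -PrimaryManagedInstanceName "<Primary-Managed-Instance-Name>" `
+    -PartnerRegion "<Secondary-Region>" -PartnerManagedInstanceName "<Secondary-Managed-Instance-Name>" `
+    -PartnerSubscriptionId "<Secondary-Subscription-Id>" `
+    -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
+```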
+
+## Change the secondary region
+
+Let's assume that instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new secondary instance in the third region. To make the transition, follow these steps:
+
+1. Create instance C with the same size as A and in the same DNS zone.
+2. Delete the failover group between instances A and B. At this point, logins will fail because the SQL aliases for the failover group listeners have been deleted and the gateway will not recognize the failover group name. The secondary databases will be disconnected from the primaries and will become read-write databases.
+3. Create a failover group with the same name between instance A and C. Follow the instructions in [failover group with SQL Managed Instance tutorial](failover-group-add-instance-tutorial.md). This is a size-of-data operation and will complete when all databases from instance A are seeded and synchronized.
+4. Delete instance B if not needed to avoid unnecessary charges.
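+
+A minimal PowerShell sketch of steps 2 and 3 is shown below, assuming instance C already exists in the same DNS zone; all names are placeholders.
+
+```powershell-interactive
+# Step 2: delete the existing failover group between instances A and B.
+Remove-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "<Resource-Group-A>" `
+    -Location "<Region-A>" -Name "<Failover-Group-Name>"
+
+# Step 3: re-create the failover group with the same name between instances A and C.
+New-AzSqlDatabaseInstanceFailoverGroup -Name "<Failover-Group-Name>" `
+    -Location "<Region-A>" -ResourceGroupName "<Resource-Group-A>" `
+    -PrimaryManagedInstanceName "<Instance-A-Name>" `
+    -PartnerRegion "<Region-C>" -PartnerManagedInstanceName "<Instance-C-Name>" `
+    -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
+```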
+
+> [!NOTE]
+> After step 2 and until step 3 is completed the databases in instance A will remain unprotected from a catastrophic failure of instance A.
+
+## Change the primary region
+
+Let's assume instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new primary instance in the third region. To make the transition, follow these steps:
+
+1. Create instance C with the same size as B and in the same DNS zone.
+2. Connect to instance B and manually fail over to switch the primary instance to B. Instance A will become the new secondary instance automatically.
+3. Delete the failover group between instances A and B. At this point, login attempts using failover group endpoints will fail. The secondary databases on A will be disconnected from the primaries and will become read-write databases.
+4. Create a failover group with the same name between instance A and C. Follow the instructions in the [failover group with managed instance tutorial](failover-group-add-instance-tutorial.md). This is a size-of-data operation and will complete when all databases from instance A are seeded and synchronized. At this point login attempts will stop failing.
+5. Delete instance A if not needed to avoid unnecessary charges.
+
+> [!CAUTION]
+> After step 3 and until step 4 is completed the databases in instance A will remain unprotected from a catastrophic failure of instance A.
+
+> [!IMPORTANT]
+> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group with the same name. Because failover group names must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
+
+## <a name="enabling-geo-replication-between-managed-instances-and-their-vnets"></a> Enabling geo-replication between MI virtual networks
+
+When you set up a failover group between primary and secondary SQL Managed Instances in two different regions, each instance is isolated using an independent virtual network. To allow replication traffic between these VNets, ensure these prerequisites are met:
+
+- The two instances of SQL Managed Instance need to be in different Azure regions.
+- The two instances of SQL Managed Instance need to be the same service tier, and have the same storage size.
+- Your secondary instance of SQL Managed Instance must be empty (no user databases).
+- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Express Route](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there is no firewall rule blocking ports 5022, and 11000-11999. Global VNet Peering is supported with the limitation described in the note below.
+
+ > [!IMPORTANT]
+ > [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). It means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL managed instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring non-default [maintenance window](../database/maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
+
+- The two SQL Managed Instance VNets cannot have overlapping IP addresses.
+- You need to set up your Network Security Groups (NSG) such that port 5022 and the range 11000-11999 are open inbound and outbound for connections from the subnet of the other managed instance. This is to allow replication traffic between the instances (see the example after this list).
+
+ > [!IMPORTANT]
+ > Misconfigured NSG security rules lead to stuck database seeding operations.
+
+- The secondary SQL Managed Instance is configured with the correct DNS zone ID. The DNS zone is a property of a SQL Managed Instance and underlying virtual cluster, and its ID is included in the host name address. The zone ID is generated as a random string when the first SQL Managed Instance is created in each VNet, and the same ID is assigned to all other instances in the same subnet. Once assigned, the DNS zone cannot be modified. SQL Managed Instances included in the same failover group must share the DNS zone. You accomplish this by passing the primary instance's zone ID as the value of the `DnsZonePartner` parameter when creating the secondary instance.
+
+ > [!NOTE]
+ > For a detailed tutorial on configuring failover groups with SQL Managed Instance, see [add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md).
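+
+For example, the inbound and outbound replication rules could be added to the secondary instance's NSG with a sketch like the following; the NSG name, resource group, partner subnet prefix, and priorities are placeholders.
+
+```powershell-interactive
+$nsg = Get-AzNetworkSecurityGroup -Name "<Secondary-NSG-Name>" -ResourceGroupName "<Secondary-Resource-Group>"
+
+# Allow geo-replication traffic (port 5022 and range 11000-11999) from and to the partner subnet.
+$nsg | Add-AzNetworkSecurityRuleConfig -Name "allow_geodr_inbound" -Access Allow -Protocol Tcp `
+        -Direction Inbound -Priority 200 -SourceAddressPrefix "<Primary-MI-Subnet-Prefix>" -SourcePortRange * `
+        -DestinationAddressPrefix * -DestinationPortRange @("5022", "11000-11999") |
+    Add-AzNetworkSecurityRuleConfig -Name "allow_geodr_outbound" -Access Allow -Protocol Tcp `
+        -Direction Outbound -Priority 201 -SourceAddressPrefix * -SourcePortRange * `
+        -DestinationAddressPrefix "<Primary-MI-Subnet-Prefix>" -DestinationPortRange @("5022", "11000-11999") |
+    Set-AzNetworkSecurityGroup
+```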
+
+## Permissions
++
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/auto-failover-group-overview.md
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role has all the necessary permissions to manage failover groups.
+
+The following table lists specific permission scopes for Azure SQL Managed Instance:
+
+| **Action** | **Permission** | **Scope**|
+| :- | :- | :- |
+|**Create failover group**| Azure RBAC write access | Primary managed instance </br> Secondary managed instance|
+| **Update failover group** | Azure RBAC write access | Failover group </br> All databases within the managed instance|
+| **Fail over failover group** | Azure RBAC write access | Failover group on new primary managed instance |
+| | |
++
+## Next steps
+
+For detailed steps configuring a failover group, see the following tutorials:
+
+- [Add a single database to a failover group](../database/failover-group-add-single-database-tutorial.md)
+- [Add an elastic pool to a failover group](../database/failover-group-add-elastic-pool-tutorial.md)
+- [Add a managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
+
+For an overview of the feature, see [auto-failover groups](auto-failover-group-sql-mi.md).
azure-sql Auto Failover Group Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+
+ Title: Auto-failover groups overview & best practices
+description: Auto-failover groups let you manage geo-replication and automatic / coordinated failover of all user databases on a managed instance in Azure SQL Managed Instance.
++++++++ Last updated : 03/01/2022++
+# Auto-failover groups overview & best practices (Azure SQL Managed Instance)
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](../database/auto-failover-group-sql-db.md)
+> * [Azure SQL Managed Instance](auto-failover-group-sql-mi.md)
+
+The auto-failover groups feature allows you to manage the replication and failover of all user databases in a managed instance to another Azure region. This article focuses on using the Auto-failover group feature with Azure SQL Managed Instance and some best practices.
+
+To get started, review [Configure auto-failover group](auto-failover-group-configure-sql-mi.md). For an end-to-end experience, see the [Auto-failover group tutorial](failover-group-add-instance-tutorial.md).
+
+> [!NOTE]
+> This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see [Auto-failover groups in SQL Database](../database/auto-failover-group-sql-db.md).
+
+## Overview
+++
+## <a name="terminology-and-capabilities"></a> Terminology and capabilities
+
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+- **Failover group (FOG)**
+
+ A failover group allows for all user databases within a managed instance to fail over as a unit to another Azure region in case the primary managed instance becomes unavailable due to a primary region outage. Since failover groups for SQL Managed Instance contain all user databases within the instance, only one failover group can be configured on an instance.
+
+ > [!IMPORTANT]
+ > The name of the failover group must be globally unique within the `.database.windows.net` domain.
+
+- **Primary**
+
+ The managed instance that hosts the primary databases in the failover group.
+
+- **Secondary**
+
+ The managed instance that hosts the secondary databases in the failover group. The secondary cannot be in the same Azure region as the primary.
+
+- **DNS zone**
+
+ A unique ID that is automatically generated when a new SQL Managed Instance is created. A multi-domain (SAN) certificate for this instance is provisioned to authenticate the client connections to any instance in the same DNS zone. The two managed instances in the same failover group must share the DNS zone.
+
+- **Failover group read-write listener**
+
+ A DNS CNAME record that points to the current primary. It is created automatically when the failover group is created and allows the read-write workload to transparently reconnect to the primary when the primary changes after failover. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.<zone_id>.database.windows.net`.
+
+- **Failover group read-only listener**
+
+ A DNS CNAME record that points to the current secondary. It is created automatically when the failover group is created and allows the read-only SQL workload to transparently connect to the secondary when the secondary changes after failover. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.<zone_id>.database.windows.net`.
+++
+## Failover group architecture
+
+The auto-failover group must be configured on the primary instance and will connect it to the secondary instance in a different Azure region. All user databases in the instance will be replicated to the secondary instance. System databases like _master_ and _msdb_ will not be replicated.
+
+The following diagram illustrates a typical configuration of a geo-redundant cloud application using managed instance and auto-failover group:
++
+If your application uses SQL Managed Instance as the data tier, follow the general guidelines and best practices outlined in this article when designing for business continuity.
++
+> [!IMPORTANT]
+> If you deploy auto-failover groups in a hub-and-spoke network topology cross-region, replication traffic should go directly between the two managed instance subnets rather than being directed through the hub networks.
+
+## Initial seeding
+
+When adding managed instances to a failover group, there is an initial seeding phase before data replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the initial seeding to complete depends on the size of your data, number of replicated databases, the load on primary databases, and the speed of the link between the primary and secondary. Under normal circumstances, possible seeding speed is up to 360 GB an hour for SQL Managed Instance. Seeding is performed for all databases in parallel.
+
+For SQL Managed Instance, consider the speed of the Express Route link between the two instances when estimating the time of the initial seeding phase. If the speed of the link between the two instances is slower than what is necessary, the time to seed is likely to be noticeably impacted. You can use the stated seeding speed, number of databases, total size of data, and the link speed to estimate how long the initial seeding phase will take before data replication starts. For example, for a single 100 GB database, the initial seed phase would take about 1.2 hours if the link is capable of pushing 84 GB per hour, and if there are no other databases being seeded. If the link can only transfer 10 GB per hour, then seeding a 100 GB database will take about 10 hours. If there are multiple databases to replicate, seeding will be executed in parallel, and, when combined with a slow link speed, the initial seeding phase may take considerably longer, especially if the parallel seeding of data from all databases exceeds the available link bandwidth. If the network bandwidth between two instances is limited and you are adding multiple managed instances to a failover group, consider adding multiple managed instances to the failover group sequentially, one by one. Given an appropriately sized gateway SKU between the two managed instances, and if corporate network bandwidth allows it, it's possible to achieve speeds as high as 360 GB an hour.
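+
+As a back-of-the-envelope check of the figures above, the effective seeding throughput is roughly the lesser of the seeding speed and the link speed. The values below are the example numbers from this section.
+
+```powershell-interactive
+$totalDataGB      = 100   # total size of data to seed
+$seedingSpeedGBph = 360   # stated maximum seeding speed for SQL Managed Instance, GB per hour
+$linkSpeedGBph    = 84    # available bandwidth between the two instances, GB per hour
+
+$effectiveGBph = [Math]::Min($seedingSpeedGBph, $linkSpeedGBph)
+"Estimated initial seeding time: {0:N1} hours" -f ($totalDataGB / $effectiveGBph)   # ~1.2 hours
+```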
++
+## <a name="creating-the-secondary-instance"></a> Creating the geo-secondary instance
+
+To ensure non-interrupted connectivity to the primary SQL Managed Instance after failover, both the primary and secondary instances must be in the same DNS zone. This guarantees that the same multi-domain (SAN) certificate can be used to authenticate client connections to either of the two instances in the failover group. When your application is ready for production deployment, create a secondary SQL Managed Instance in a different region and make sure it shares the DNS zone with the primary SQL Managed Instance. You can do this by specifying an optional parameter during creation. If you are using PowerShell or the REST API, the name of the optional parameter is `DnsZonePartner`. The name of the corresponding optional field in the Azure portal is *Primary Managed Instance*.
+
+> [!IMPORTANT]
+> The first managed instance created in the subnet determines the DNS zone for all subsequent instances in the same subnet. This means that two instances from the same subnet cannot belong to different DNS zones.
+
+For more info